00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 628 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3293 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.090 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.091 The recommended git tool is: git 00:00:00.092 using credential 00000000-0000-0000-0000-000000000002 00:00:00.096 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.139 Fetching changes from the remote Git repository 00:00:00.140 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.178 Using shallow fetch with depth 1 00:00:00.178 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.179 > git --version # timeout=10 00:00:00.215 > git --version # 'git version 2.39.2' 00:00:00.215 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.246 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.246 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.341 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.368 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.381 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:04.381 > git config core.sparsecheckout # timeout=10 00:00:04.393 > git read-tree -mu HEAD # timeout=10 00:00:04.411 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:04.454 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:04.455 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:04.554 [Pipeline] Start of Pipeline 00:00:04.565 [Pipeline] library 00:00:04.566 Loading library shm_lib@master 00:00:04.566 Library shm_lib@master is cached. Copying from home. 00:00:04.582 [Pipeline] node 00:00:04.588 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.591 [Pipeline] { 00:00:04.603 [Pipeline] catchError 00:00:04.604 [Pipeline] { 00:00:04.616 [Pipeline] wrap 00:00:04.624 [Pipeline] { 00:00:04.630 [Pipeline] stage 00:00:04.631 [Pipeline] { (Prologue) 00:00:04.804 [Pipeline] sh 00:00:05.090 + logger -p user.info -t JENKINS-CI 00:00:05.110 [Pipeline] echo 00:00:05.112 Node: WFP8 00:00:05.120 [Pipeline] sh 00:00:05.425 [Pipeline] setCustomBuildProperty 00:00:05.437 [Pipeline] echo 00:00:05.440 Cleanup processes 00:00:05.445 [Pipeline] sh 00:00:05.728 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.728 3268224 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.738 [Pipeline] sh 00:00:06.020 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.020 ++ grep -v 'sudo pgrep' 00:00:06.020 ++ awk '{print $1}' 00:00:06.020 + sudo kill -9 00:00:06.020 + true 00:00:06.040 [Pipeline] cleanWs 00:00:06.052 [WS-CLEANUP] Deleting project workspace... 00:00:06.052 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.059 [WS-CLEANUP] done 00:00:06.064 [Pipeline] setCustomBuildProperty 00:00:06.084 [Pipeline] sh 00:00:07.384 + sudo git config --global --replace-all safe.directory '*' 00:00:07.522 [Pipeline] httpRequest 00:00:07.547 [Pipeline] echo 00:00:07.548 Sorcerer 10.211.164.101 is alive 00:00:07.555 [Pipeline] httpRequest 00:00:07.559 HttpMethod: GET 00:00:07.559 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.560 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.574 Response Code: HTTP/1.1 200 OK 00:00:07.574 Success: Status code 200 is in the accepted range: 200,404 00:00:07.574 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.460 [Pipeline] sh 00:00:09.749 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.765 [Pipeline] httpRequest 00:00:09.804 [Pipeline] echo 00:00:09.806 Sorcerer 10.211.164.101 is alive 00:00:09.815 [Pipeline] httpRequest 00:00:09.820 HttpMethod: GET 00:00:09.820 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:09.821 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:00:09.836 Response Code: HTTP/1.1 200 OK 00:00:09.837 Success: Status code 200 is in the accepted range: 200,404 00:00:09.837 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:02.454 [Pipeline] sh 00:01:02.735 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz 00:01:05.285 [Pipeline] sh 00:01:05.569 + git -C spdk log --oneline -n5 00:01:05.569 dbef7efac test: fix dpdk builds on ubuntu24 00:01:05.569 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:05.569 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:05.569 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:05.569 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:05.588 [Pipeline] withCredentials 00:01:05.598 > git --version # timeout=10 00:01:05.611 > git --version # 'git version 2.39.2' 00:01:05.631 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:05.634 [Pipeline] { 00:01:05.644 [Pipeline] retry 00:01:05.646 [Pipeline] { 00:01:05.666 [Pipeline] sh 00:01:05.954 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:07.877 [Pipeline] } 00:01:07.902 [Pipeline] // retry 00:01:07.908 [Pipeline] } 00:01:07.932 [Pipeline] // withCredentials 00:01:07.946 [Pipeline] httpRequest 00:01:07.964 [Pipeline] echo 00:01:07.966 Sorcerer 10.211.164.101 is alive 00:01:07.976 [Pipeline] httpRequest 00:01:07.980 HttpMethod: GET 00:01:07.981 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:07.981 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:07.984 Response Code: HTTP/1.1 200 OK 00:01:07.985 Success: Status code 200 is in the accepted range: 200,404 00:01:07.985 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:11.808 [Pipeline] sh 00:01:12.089 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:13.481 [Pipeline] sh 00:01:13.771 + git -C dpdk log --oneline -n5 00:01:13.771 caf0f5d395 version: 22.11.4 00:01:13.771 7d6f1cc05f Revert "net/iavf: fix abnormal 
disable HW interrupt" 00:01:13.771 dc9c799c7d vhost: fix missing spinlock unlock 00:01:13.771 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:13.771 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:13.783 [Pipeline] } 00:01:13.804 [Pipeline] // stage 00:01:13.815 [Pipeline] stage 00:01:13.817 [Pipeline] { (Prepare) 00:01:13.840 [Pipeline] writeFile 00:01:13.860 [Pipeline] sh 00:01:14.144 + logger -p user.info -t JENKINS-CI 00:01:14.158 [Pipeline] sh 00:01:14.448 + logger -p user.info -t JENKINS-CI 00:01:14.462 [Pipeline] sh 00:01:14.748 + cat autorun-spdk.conf 00:01:14.748 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.748 SPDK_TEST_NVMF=1 00:01:14.748 SPDK_TEST_NVME_CLI=1 00:01:14.748 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:14.748 SPDK_TEST_NVMF_NICS=e810 00:01:14.748 SPDK_TEST_VFIOUSER=1 00:01:14.748 SPDK_RUN_UBSAN=1 00:01:14.748 NET_TYPE=phy 00:01:14.748 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:14.748 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:14.755 RUN_NIGHTLY=1 00:01:14.763 [Pipeline] readFile 00:01:14.795 [Pipeline] withEnv 00:01:14.798 [Pipeline] { 00:01:14.815 [Pipeline] sh 00:01:15.103 + set -ex 00:01:15.103 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:15.103 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:15.103 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.103 ++ SPDK_TEST_NVMF=1 00:01:15.103 ++ SPDK_TEST_NVME_CLI=1 00:01:15.103 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.103 ++ SPDK_TEST_NVMF_NICS=e810 00:01:15.103 ++ SPDK_TEST_VFIOUSER=1 00:01:15.103 ++ SPDK_RUN_UBSAN=1 00:01:15.103 ++ NET_TYPE=phy 00:01:15.103 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:15.103 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:15.103 ++ RUN_NIGHTLY=1 00:01:15.103 + case $SPDK_TEST_NVMF_NICS in 00:01:15.103 + DRIVERS=ice 00:01:15.103 + [[ tcp == \r\d\m\a ]] 00:01:15.103 + [[ -n ice ]] 00:01:15.103 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:15.103 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:15.103 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:15.103 rmmod: ERROR: Module irdma is not currently loaded 00:01:15.103 rmmod: ERROR: Module i40iw is not currently loaded 00:01:15.103 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:15.103 + true 00:01:15.104 + for D in $DRIVERS 00:01:15.104 + sudo modprobe ice 00:01:15.104 + exit 0 00:01:15.123 [Pipeline] } 00:01:15.144 [Pipeline] // withEnv 00:01:15.151 [Pipeline] } 00:01:15.169 [Pipeline] // stage 00:01:15.179 [Pipeline] catchError 00:01:15.181 [Pipeline] { 00:01:15.197 [Pipeline] timeout 00:01:15.197 Timeout set to expire in 50 min 00:01:15.198 [Pipeline] { 00:01:15.211 [Pipeline] stage 00:01:15.212 [Pipeline] { (Tests) 00:01:15.229 [Pipeline] sh 00:01:15.545 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:15.545 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:15.545 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:15.545 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:15.545 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.545 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:15.545 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:15.545 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:15.545 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:15.545 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:15.545 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:15.545 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:15.545 + source /etc/os-release 00:01:15.546 ++ NAME='Fedora Linux' 00:01:15.546 ++ VERSION='38 (Cloud Edition)' 00:01:15.546 ++ ID=fedora 00:01:15.546 ++ VERSION_ID=38 00:01:15.546 ++ VERSION_CODENAME= 00:01:15.546 ++ PLATFORM_ID=platform:f38 00:01:15.546 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:15.546 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:15.546 ++ LOGO=fedora-logo-icon 00:01:15.546 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:15.546 ++ HOME_URL=https://fedoraproject.org/ 00:01:15.546 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:15.546 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:15.546 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:15.546 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:15.546 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:15.546 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:15.546 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:15.546 ++ SUPPORT_END=2024-05-14 00:01:15.546 ++ VARIANT='Cloud Edition' 00:01:15.546 ++ VARIANT_ID=cloud 00:01:15.546 + uname -a 00:01:15.546 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:15.546 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:18.087 Hugepages 00:01:18.087 node hugesize free / total 00:01:18.087 node0 1048576kB 0 / 0 00:01:18.087 node0 2048kB 0 / 0 00:01:18.087 node1 1048576kB 0 / 0 00:01:18.087 node1 2048kB 0 / 0 00:01:18.087 00:01:18.087 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:18.087 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:18.087 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:18.087 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:18.087 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:18.087 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:18.087 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:18.087 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:18.087 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:18.087 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:18.087 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:18.087 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:18.087 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:18.087 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:18.087 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:18.087 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:18.087 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:18.087 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:18.087 + rm -f /tmp/spdk-ld-path 00:01:18.087 + source autorun-spdk.conf 00:01:18.087 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.087 ++ SPDK_TEST_NVMF=1 00:01:18.087 ++ SPDK_TEST_NVME_CLI=1 00:01:18.087 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.087 ++ SPDK_TEST_NVMF_NICS=e810 00:01:18.087 ++ SPDK_TEST_VFIOUSER=1 00:01:18.087 ++ SPDK_RUN_UBSAN=1 00:01:18.087 ++ NET_TYPE=phy 00:01:18.087 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:18.087 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.087 ++ RUN_NIGHTLY=1 00:01:18.087 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:18.087 + [[ -n '' ]] 00:01:18.087 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:18.087 + for M in /var/spdk/build-*-manifest.txt 00:01:18.087 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:18.087 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:18.087 + for M in /var/spdk/build-*-manifest.txt 00:01:18.087 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:18.087 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:18.087 ++ uname 00:01:18.087 + [[ Linux == \L\i\n\u\x ]] 00:01:18.087 + sudo dmesg -T 00:01:18.087 + sudo dmesg --clear 00:01:18.087 + dmesg_pid=3269682 00:01:18.087 + [[ Fedora Linux == FreeBSD ]] 00:01:18.087 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.087 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:18.087 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:18.087 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:18.087 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:18.087 + [[ -x /usr/src/fio-static/fio ]] 00:01:18.087 + sudo dmesg -Tw 00:01:18.087 + export FIO_BIN=/usr/src/fio-static/fio 00:01:18.087 + FIO_BIN=/usr/src/fio-static/fio 00:01:18.087 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:18.087 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:18.087 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:18.087 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.087 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:18.087 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:18.087 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.087 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:18.087 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:18.087 Test configuration: 00:01:18.087 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:18.087 SPDK_TEST_NVMF=1 00:01:18.087 SPDK_TEST_NVME_CLI=1 00:01:18.087 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:18.087 SPDK_TEST_NVMF_NICS=e810 00:01:18.087 SPDK_TEST_VFIOUSER=1 00:01:18.087 SPDK_RUN_UBSAN=1 00:01:18.087 NET_TYPE=phy 00:01:18.087 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:18.087 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.087 RUN_NIGHTLY=1 22:00:13 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:18.087 22:00:13 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:18.087 22:00:13 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:18.087 22:00:13 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:18.087 22:00:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.087 22:00:13 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.087 22:00:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.087 22:00:13 -- paths/export.sh@5 -- $ export PATH 00:01:18.087 22:00:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:18.088 22:00:13 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:18.088 22:00:13 -- common/autobuild_common.sh@438 -- $ date +%s 00:01:18.088 22:00:13 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721851213.XXXXXX 00:01:18.088 22:00:13 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721851213.cQaAPA 00:01:18.088 22:00:13 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:01:18.088 22:00:13 -- common/autobuild_common.sh@444 -- $ '[' -n v22.11.4 ']' 00:01:18.088 22:00:13 -- common/autobuild_common.sh@445 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.088 22:00:13 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:18.088 22:00:13 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:18.088 22:00:13 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:18.088 22:00:13 -- common/autobuild_common.sh@454 -- $ get_config_params 00:01:18.088 22:00:13 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:18.088 22:00:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.088 22:00:13 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:18.088 22:00:13 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:18.088 22:00:13 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:18.088 22:00:13 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:18.088 22:00:13 -- spdk/autobuild.sh@16 -- $ date -u 00:01:18.088 Wed Jul 24 08:00:13 PM UTC 2024 00:01:18.088 22:00:13 -- spdk/autobuild.sh@17 -- $ git describe 
--tags 00:01:18.353 LTS-60-gdbef7efac 00:01:18.354 22:00:13 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:18.354 22:00:13 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:18.354 22:00:13 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:18.354 22:00:13 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:18.354 22:00:13 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:18.354 22:00:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.354 ************************************ 00:01:18.354 START TEST ubsan 00:01:18.354 ************************************ 00:01:18.354 22:00:13 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:18.354 using ubsan 00:01:18.354 00:01:18.354 real 0m0.000s 00:01:18.354 user 0m0.000s 00:01:18.354 sys 0m0.000s 00:01:18.354 22:00:13 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:18.354 22:00:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.354 ************************************ 00:01:18.354 END TEST ubsan 00:01:18.354 ************************************ 00:01:18.354 22:00:13 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:18.354 22:00:13 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:18.354 22:00:13 -- common/autobuild_common.sh@430 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:18.354 22:00:13 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:18.354 22:00:13 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:18.354 22:00:13 -- common/autotest_common.sh@10 -- $ set +x 00:01:18.354 ************************************ 00:01:18.354 START TEST build_native_dpdk 00:01:18.354 ************************************ 00:01:18.354 22:00:13 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:01:18.354 22:00:13 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:18.354 22:00:13 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:18.354 22:00:13 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:18.354 22:00:13 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:18.355 22:00:13 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:18.355 22:00:13 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:18.355 22:00:13 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:18.355 22:00:13 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:18.355 22:00:13 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:18.355 22:00:13 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:18.355 22:00:13 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:18.355 22:00:13 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:18.355 22:00:13 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:18.355 22:00:13 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:18.355 22:00:13 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.355 22:00:13 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:18.355 22:00:13 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:18.355 22:00:13 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:18.355 22:00:13 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:18.355 22:00:13 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:18.355 caf0f5d395 version: 22.11.4 00:01:18.355 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:18.355 dc9c799c7d vhost: fix missing spinlock unlock 00:01:18.355 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:18.355 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:18.355 22:00:13 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:18.355 22:00:13 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:18.355 22:00:13 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:18.355 22:00:13 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:18.355 22:00:13 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:18.355 22:00:13 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:18.355 22:00:13 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:18.355 22:00:13 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:18.355 22:00:13 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:18.355 22:00:13 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:18.355 22:00:13 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:18.355 22:00:13 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:18.356 22:00:13 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:18.356 22:00:13 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:18.356 22:00:13 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:18.356 22:00:13 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:18.356 22:00:13 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:18.356 22:00:13 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:18.356 22:00:13 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:18.356 22:00:13 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:18.356 22:00:13 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:18.356 22:00:13 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:18.356 22:00:13 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:18.356 22:00:13 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:18.356 22:00:13 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:18.356 22:00:13 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:18.356 22:00:13 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:18.356 22:00:13 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:18.356 22:00:13 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:18.356 22:00:13 -- scripts/common.sh@343 -- $ case "$op" in 00:01:18.356 22:00:13 -- scripts/common.sh@344 -- $ : 1 00:01:18.356 22:00:13 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:18.356 22:00:13 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:18.356 22:00:13 -- scripts/common.sh@364 -- $ decimal 22 00:01:18.356 22:00:13 -- scripts/common.sh@352 -- $ local d=22 00:01:18.356 22:00:13 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:18.356 22:00:13 -- scripts/common.sh@354 -- $ echo 22 00:01:18.356 22:00:13 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:18.356 22:00:13 -- scripts/common.sh@365 -- $ decimal 21 00:01:18.356 22:00:13 -- scripts/common.sh@352 -- $ local d=21 00:01:18.356 22:00:13 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:18.356 22:00:13 -- scripts/common.sh@354 -- $ echo 21 00:01:18.356 22:00:13 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:18.356 22:00:13 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:18.357 22:00:13 -- scripts/common.sh@366 -- $ return 1 00:01:18.357 22:00:13 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:18.357 patching file config/rte_config.h 00:01:18.357 Hunk #1 succeeded at 60 (offset 1 line). 00:01:18.357 22:00:13 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:18.357 22:00:13 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:18.357 22:00:13 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:18.357 22:00:13 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:18.357 22:00:13 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:18.357 22:00:13 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:18.357 22:00:13 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:18.357 22:00:13 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:18.357 22:00:13 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:18.357 22:00:13 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:18.357 22:00:13 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:18.357 22:00:13 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:18.357 22:00:13 -- scripts/common.sh@343 -- $ case "$op" in 00:01:18.357 22:00:13 -- scripts/common.sh@344 -- $ : 1 00:01:18.357 22:00:13 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:18.357 22:00:13 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:18.357 22:00:13 -- scripts/common.sh@364 -- $ decimal 22 00:01:18.357 22:00:13 -- scripts/common.sh@352 -- $ local d=22 00:01:18.357 22:00:13 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:18.357 22:00:13 -- scripts/common.sh@354 -- $ echo 22 00:01:18.357 22:00:13 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:18.357 22:00:13 -- scripts/common.sh@365 -- $ decimal 24 00:01:18.357 22:00:13 -- scripts/common.sh@352 -- $ local d=24 00:01:18.357 22:00:13 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:18.357 22:00:13 -- scripts/common.sh@354 -- $ echo 24 00:01:18.357 22:00:13 -- scripts/common.sh@365 -- $ ver2[v]=24 00:01:18.357 22:00:13 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:18.357 22:00:13 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:01:18.358 22:00:13 -- scripts/common.sh@367 -- $ return 0 00:01:18.358 22:00:13 -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:18.358 patching file lib/pcapng/rte_pcapng.c 00:01:18.358 Hunk #1 succeeded at 110 (offset -18 lines). 
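The two patch decisions traced above come from a dotted-version comparison (cmp_versions/lt in scripts/common.sh): the rte_config.h hunk is applied because 22.11.4 is not older than 21.11.0, and the rte_pcapng.c hunk because 22.11.4 is older than 24.07.0. A minimal sketch of that comparison in bash, assuming a simplified lt helper rather than the exact SPDK implementation:

    lt() {  # return 0 (true) if dotted version $1 sorts before $2
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }

    # Mirrors the trace above: both branches were taken for DPDK 22.11.4.
    lt 22.11.4 21.11.0 || echo "DPDK >= 21.11.0: apply rte_config.h patch"
    lt 22.11.4 24.07.0 && echo "DPDK < 24.07.0: apply rte_pcapng.c patch"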
00:01:18.358 22:00:13 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:18.358 22:00:13 -- common/autobuild_common.sh@181 -- $ uname -s 00:01:18.358 22:00:13 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:18.358 22:00:13 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:18.358 22:00:13 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:22.563 The Meson build system 00:01:22.563 Version: 1.3.1 00:01:22.563 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:22.563 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:22.563 Build type: native build 00:01:22.563 Program cat found: YES (/usr/bin/cat) 00:01:22.563 Project name: DPDK 00:01:22.563 Project version: 22.11.4 00:01:22.563 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:22.563 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:22.563 Host machine cpu family: x86_64 00:01:22.563 Host machine cpu: x86_64 00:01:22.563 Message: ## Building in Developer Mode ## 00:01:22.563 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:22.563 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:22.563 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:22.563 Program objdump found: YES (/usr/bin/objdump) 00:01:22.563 Program python3 found: YES (/usr/bin/python3) 00:01:22.563 Program cat found: YES (/usr/bin/cat) 00:01:22.563 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:22.563 Checking for size of "void *" : 8 00:01:22.563 Checking for size of "void *" : 8 (cached) 00:01:22.563 Library m found: YES 00:01:22.563 Library numa found: YES 00:01:22.563 Has header "numaif.h" : YES 00:01:22.563 Library fdt found: NO 00:01:22.563 Library execinfo found: NO 00:01:22.563 Has header "execinfo.h" : YES 00:01:22.563 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:22.563 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:22.563 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:22.563 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:22.563 Run-time dependency openssl found: YES 3.0.9 00:01:22.563 Run-time dependency libpcap found: YES 1.10.4 00:01:22.563 Has header "pcap.h" with dependency libpcap: YES 00:01:22.563 Compiler for C supports arguments -Wcast-qual: YES 00:01:22.563 Compiler for C supports arguments -Wdeprecated: YES 00:01:22.563 Compiler for C supports arguments -Wformat: YES 00:01:22.563 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:22.563 Compiler for C supports arguments -Wformat-security: NO 00:01:22.563 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:22.563 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:22.563 Compiler for C supports arguments -Wnested-externs: YES 00:01:22.563 Compiler for C supports arguments -Wold-style-definition: YES 00:01:22.563 Compiler for C supports arguments -Wpointer-arith: YES 00:01:22.563 Compiler for C supports arguments -Wsign-compare: YES 00:01:22.563 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:22.563 Compiler for C supports arguments -Wundef: YES 00:01:22.563 Compiler for C supports arguments -Wwrite-strings: YES 00:01:22.563 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:22.563 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:22.563 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:22.563 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:22.563 Compiler for C supports arguments -mavx512f: YES 00:01:22.563 Checking if "AVX512 checking" compiles: YES 00:01:22.563 Fetching value of define "__SSE4_2__" : 1 00:01:22.563 Fetching value of define "__AES__" : 1 00:01:22.563 Fetching value of define "__AVX__" : 1 00:01:22.563 Fetching value of define "__AVX2__" : 1 00:01:22.563 Fetching value of define "__AVX512BW__" : 1 00:01:22.563 Fetching value of define "__AVX512CD__" : 1 00:01:22.563 Fetching value of define "__AVX512DQ__" : 1 00:01:22.563 Fetching value of define "__AVX512F__" : 1 00:01:22.563 Fetching value of define "__AVX512VL__" : 1 00:01:22.563 Fetching value of define "__PCLMUL__" : 1 00:01:22.563 Fetching value of define "__RDRND__" : 1 00:01:22.563 Fetching value of define "__RDSEED__" : 1 00:01:22.563 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:22.563 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:22.563 Message: lib/kvargs: Defining dependency "kvargs" 00:01:22.563 Message: lib/telemetry: Defining dependency "telemetry" 00:01:22.563 Checking for function "getentropy" : YES 00:01:22.563 Message: lib/eal: Defining dependency "eal" 00:01:22.563 Message: lib/ring: Defining dependency "ring" 00:01:22.563 Message: lib/rcu: Defining dependency "rcu" 00:01:22.563 Message: lib/mempool: Defining dependency "mempool" 00:01:22.563 Message: lib/mbuf: Defining dependency "mbuf" 00:01:22.563 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:22.563 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:01:22.563 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:22.563 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:22.563 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:22.563 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:22.563 Compiler for C supports arguments -mpclmul: YES 00:01:22.563 Compiler for C supports arguments -maes: YES 00:01:22.563 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:22.563 Compiler for C supports arguments -mavx512bw: YES 00:01:22.563 Compiler for C supports arguments -mavx512dq: YES 00:01:22.563 Compiler for C supports arguments -mavx512vl: YES 00:01:22.563 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:22.563 Compiler for C supports arguments -mavx2: YES 00:01:22.563 Compiler for C supports arguments -mavx: YES 00:01:22.563 Message: lib/net: Defining dependency "net" 00:01:22.563 Message: lib/meter: Defining dependency "meter" 00:01:22.563 Message: lib/ethdev: Defining dependency "ethdev" 00:01:22.563 Message: lib/pci: Defining dependency "pci" 00:01:22.563 Message: lib/cmdline: Defining dependency "cmdline" 00:01:22.563 Message: lib/metrics: Defining dependency "metrics" 00:01:22.563 Message: lib/hash: Defining dependency "hash" 00:01:22.563 Message: lib/timer: Defining dependency "timer" 00:01:22.563 Fetching value of define "__AVX2__" : 1 (cached) 00:01:22.563 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:22.563 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:22.563 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:22.563 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:22.563 Message: lib/acl: Defining dependency "acl" 00:01:22.563 Message: lib/bbdev: Defining dependency "bbdev" 00:01:22.563 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:22.563 Run-time dependency libelf found: YES 0.190 00:01:22.563 Message: lib/bpf: Defining dependency "bpf" 00:01:22.563 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:22.563 Message: lib/compressdev: Defining dependency "compressdev" 00:01:22.563 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:22.563 Message: lib/distributor: Defining dependency "distributor" 00:01:22.563 Message: lib/efd: Defining dependency "efd" 00:01:22.563 Message: lib/eventdev: Defining dependency "eventdev" 00:01:22.563 Message: lib/gpudev: Defining dependency "gpudev" 00:01:22.563 Message: lib/gro: Defining dependency "gro" 00:01:22.563 Message: lib/gso: Defining dependency "gso" 00:01:22.563 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:22.563 Message: lib/jobstats: Defining dependency "jobstats" 00:01:22.563 Message: lib/latencystats: Defining dependency "latencystats" 00:01:22.563 Message: lib/lpm: Defining dependency "lpm" 00:01:22.563 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:22.563 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:22.563 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:22.563 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:22.563 Message: lib/member: Defining dependency "member" 00:01:22.563 Message: lib/pcapng: Defining dependency "pcapng" 00:01:22.563 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:22.563 Message: lib/power: Defining dependency "power" 00:01:22.563 Message: lib/rawdev: Defining dependency "rawdev" 00:01:22.563 Message: lib/regexdev: Defining dependency "regexdev" 00:01:22.563 Message: lib/dmadev: 
Defining dependency "dmadev" 00:01:22.563 Message: lib/rib: Defining dependency "rib" 00:01:22.563 Message: lib/reorder: Defining dependency "reorder" 00:01:22.563 Message: lib/sched: Defining dependency "sched" 00:01:22.563 Message: lib/security: Defining dependency "security" 00:01:22.563 Message: lib/stack: Defining dependency "stack" 00:01:22.563 Has header "linux/userfaultfd.h" : YES 00:01:22.563 Message: lib/vhost: Defining dependency "vhost" 00:01:22.563 Message: lib/ipsec: Defining dependency "ipsec" 00:01:22.563 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:22.563 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:22.563 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:22.563 Message: lib/fib: Defining dependency "fib" 00:01:22.563 Message: lib/port: Defining dependency "port" 00:01:22.563 Message: lib/pdump: Defining dependency "pdump" 00:01:22.563 Message: lib/table: Defining dependency "table" 00:01:22.563 Message: lib/pipeline: Defining dependency "pipeline" 00:01:22.563 Message: lib/graph: Defining dependency "graph" 00:01:22.563 Message: lib/node: Defining dependency "node" 00:01:22.563 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:22.563 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:22.563 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:22.563 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:22.563 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:22.563 Compiler for C supports arguments -Wno-unused-value: YES 00:01:22.563 Compiler for C supports arguments -Wno-format: YES 00:01:22.563 Compiler for C supports arguments -Wno-format-security: YES 00:01:22.563 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:22.835 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:22.835 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:22.835 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:22.835 Fetching value of define "__AVX2__" : 1 (cached) 00:01:22.835 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:22.835 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:22.835 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:22.835 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:22.835 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:22.835 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:22.835 Program doxygen found: YES (/usr/bin/doxygen) 00:01:22.835 Configuring doxy-api.conf using configuration 00:01:22.835 Program sphinx-build found: NO 00:01:22.835 Configuring rte_build_config.h using configuration 00:01:22.835 Message: 00:01:22.835 ================= 00:01:22.835 Applications Enabled 00:01:22.835 ================= 00:01:22.835 00:01:22.835 apps: 00:01:22.835 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:22.835 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:22.835 test-security-perf, 00:01:22.835 00:01:22.835 Message: 00:01:22.835 ================= 00:01:22.835 Libraries Enabled 00:01:22.835 ================= 00:01:22.835 00:01:22.835 libs: 00:01:22.835 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:22.835 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:22.835 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:22.835 eventdev, gpudev, 
gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:22.835 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:22.835 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:22.835 table, pipeline, graph, node, 00:01:22.835 00:01:22.835 Message: 00:01:22.835 =============== 00:01:22.835 Drivers Enabled 00:01:22.835 =============== 00:01:22.835 00:01:22.835 common: 00:01:22.835 00:01:22.835 bus: 00:01:22.835 pci, vdev, 00:01:22.835 mempool: 00:01:22.835 ring, 00:01:22.835 dma: 00:01:22.835 00:01:22.835 net: 00:01:22.835 i40e, 00:01:22.835 raw: 00:01:22.835 00:01:22.835 crypto: 00:01:22.835 00:01:22.835 compress: 00:01:22.835 00:01:22.835 regex: 00:01:22.835 00:01:22.835 vdpa: 00:01:22.835 00:01:22.835 event: 00:01:22.835 00:01:22.835 baseband: 00:01:22.835 00:01:22.835 gpu: 00:01:22.835 00:01:22.835 00:01:22.835 Message: 00:01:22.835 ================= 00:01:22.835 Content Skipped 00:01:22.835 ================= 00:01:22.835 00:01:22.835 apps: 00:01:22.835 00:01:22.835 libs: 00:01:22.835 kni: explicitly disabled via build config (deprecated lib) 00:01:22.835 flow_classify: explicitly disabled via build config (deprecated lib) 00:01:22.835 00:01:22.835 drivers: 00:01:22.835 common/cpt: not in enabled drivers build config 00:01:22.835 common/dpaax: not in enabled drivers build config 00:01:22.835 common/iavf: not in enabled drivers build config 00:01:22.835 common/idpf: not in enabled drivers build config 00:01:22.835 common/mvep: not in enabled drivers build config 00:01:22.835 common/octeontx: not in enabled drivers build config 00:01:22.835 bus/auxiliary: not in enabled drivers build config 00:01:22.835 bus/dpaa: not in enabled drivers build config 00:01:22.835 bus/fslmc: not in enabled drivers build config 00:01:22.835 bus/ifpga: not in enabled drivers build config 00:01:22.835 bus/vmbus: not in enabled drivers build config 00:01:22.835 common/cnxk: not in enabled drivers build config 00:01:22.835 common/mlx5: not in enabled drivers build config 00:01:22.835 common/qat: not in enabled drivers build config 00:01:22.835 common/sfc_efx: not in enabled drivers build config 00:01:22.835 mempool/bucket: not in enabled drivers build config 00:01:22.835 mempool/cnxk: not in enabled drivers build config 00:01:22.835 mempool/dpaa: not in enabled drivers build config 00:01:22.835 mempool/dpaa2: not in enabled drivers build config 00:01:22.835 mempool/octeontx: not in enabled drivers build config 00:01:22.835 mempool/stack: not in enabled drivers build config 00:01:22.835 dma/cnxk: not in enabled drivers build config 00:01:22.835 dma/dpaa: not in enabled drivers build config 00:01:22.835 dma/dpaa2: not in enabled drivers build config 00:01:22.835 dma/hisilicon: not in enabled drivers build config 00:01:22.835 dma/idxd: not in enabled drivers build config 00:01:22.835 dma/ioat: not in enabled drivers build config 00:01:22.835 dma/skeleton: not in enabled drivers build config 00:01:22.835 net/af_packet: not in enabled drivers build config 00:01:22.835 net/af_xdp: not in enabled drivers build config 00:01:22.835 net/ark: not in enabled drivers build config 00:01:22.835 net/atlantic: not in enabled drivers build config 00:01:22.835 net/avp: not in enabled drivers build config 00:01:22.835 net/axgbe: not in enabled drivers build config 00:01:22.835 net/bnx2x: not in enabled drivers build config 00:01:22.835 net/bnxt: not in enabled drivers build config 00:01:22.835 net/bonding: not in enabled drivers build config 00:01:22.835 net/cnxk: not in enabled drivers build config 
00:01:22.835 net/cxgbe: not in enabled drivers build config 00:01:22.835 net/dpaa: not in enabled drivers build config 00:01:22.835 net/dpaa2: not in enabled drivers build config 00:01:22.835 net/e1000: not in enabled drivers build config 00:01:22.835 net/ena: not in enabled drivers build config 00:01:22.835 net/enetc: not in enabled drivers build config 00:01:22.835 net/enetfec: not in enabled drivers build config 00:01:22.835 net/enic: not in enabled drivers build config 00:01:22.835 net/failsafe: not in enabled drivers build config 00:01:22.835 net/fm10k: not in enabled drivers build config 00:01:22.835 net/gve: not in enabled drivers build config 00:01:22.835 net/hinic: not in enabled drivers build config 00:01:22.835 net/hns3: not in enabled drivers build config 00:01:22.835 net/iavf: not in enabled drivers build config 00:01:22.835 net/ice: not in enabled drivers build config 00:01:22.836 net/idpf: not in enabled drivers build config 00:01:22.836 net/igc: not in enabled drivers build config 00:01:22.836 net/ionic: not in enabled drivers build config 00:01:22.836 net/ipn3ke: not in enabled drivers build config 00:01:22.836 net/ixgbe: not in enabled drivers build config 00:01:22.836 net/kni: not in enabled drivers build config 00:01:22.836 net/liquidio: not in enabled drivers build config 00:01:22.836 net/mana: not in enabled drivers build config 00:01:22.836 net/memif: not in enabled drivers build config 00:01:22.836 net/mlx4: not in enabled drivers build config 00:01:22.836 net/mlx5: not in enabled drivers build config 00:01:22.836 net/mvneta: not in enabled drivers build config 00:01:22.836 net/mvpp2: not in enabled drivers build config 00:01:22.836 net/netvsc: not in enabled drivers build config 00:01:22.836 net/nfb: not in enabled drivers build config 00:01:22.836 net/nfp: not in enabled drivers build config 00:01:22.836 net/ngbe: not in enabled drivers build config 00:01:22.836 net/null: not in enabled drivers build config 00:01:22.836 net/octeontx: not in enabled drivers build config 00:01:22.836 net/octeon_ep: not in enabled drivers build config 00:01:22.836 net/pcap: not in enabled drivers build config 00:01:22.836 net/pfe: not in enabled drivers build config 00:01:22.836 net/qede: not in enabled drivers build config 00:01:22.836 net/ring: not in enabled drivers build config 00:01:22.836 net/sfc: not in enabled drivers build config 00:01:22.836 net/softnic: not in enabled drivers build config 00:01:22.836 net/tap: not in enabled drivers build config 00:01:22.836 net/thunderx: not in enabled drivers build config 00:01:22.836 net/txgbe: not in enabled drivers build config 00:01:22.836 net/vdev_netvsc: not in enabled drivers build config 00:01:22.836 net/vhost: not in enabled drivers build config 00:01:22.836 net/virtio: not in enabled drivers build config 00:01:22.836 net/vmxnet3: not in enabled drivers build config 00:01:22.836 raw/cnxk_bphy: not in enabled drivers build config 00:01:22.836 raw/cnxk_gpio: not in enabled drivers build config 00:01:22.836 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:22.836 raw/ifpga: not in enabled drivers build config 00:01:22.836 raw/ntb: not in enabled drivers build config 00:01:22.836 raw/skeleton: not in enabled drivers build config 00:01:22.836 crypto/armv8: not in enabled drivers build config 00:01:22.836 crypto/bcmfs: not in enabled drivers build config 00:01:22.836 crypto/caam_jr: not in enabled drivers build config 00:01:22.836 crypto/ccp: not in enabled drivers build config 00:01:22.836 crypto/cnxk: not in enabled drivers 
build config 00:01:22.836 crypto/dpaa_sec: not in enabled drivers build config 00:01:22.836 crypto/dpaa2_sec: not in enabled drivers build config 00:01:22.836 crypto/ipsec_mb: not in enabled drivers build config 00:01:22.836 crypto/mlx5: not in enabled drivers build config 00:01:22.836 crypto/mvsam: not in enabled drivers build config 00:01:22.836 crypto/nitrox: not in enabled drivers build config 00:01:22.836 crypto/null: not in enabled drivers build config 00:01:22.836 crypto/octeontx: not in enabled drivers build config 00:01:22.836 crypto/openssl: not in enabled drivers build config 00:01:22.836 crypto/scheduler: not in enabled drivers build config 00:01:22.836 crypto/uadk: not in enabled drivers build config 00:01:22.836 crypto/virtio: not in enabled drivers build config 00:01:22.836 compress/isal: not in enabled drivers build config 00:01:22.836 compress/mlx5: not in enabled drivers build config 00:01:22.836 compress/octeontx: not in enabled drivers build config 00:01:22.836 compress/zlib: not in enabled drivers build config 00:01:22.836 regex/mlx5: not in enabled drivers build config 00:01:22.836 regex/cn9k: not in enabled drivers build config 00:01:22.836 vdpa/ifc: not in enabled drivers build config 00:01:22.836 vdpa/mlx5: not in enabled drivers build config 00:01:22.836 vdpa/sfc: not in enabled drivers build config 00:01:22.836 event/cnxk: not in enabled drivers build config 00:01:22.836 event/dlb2: not in enabled drivers build config 00:01:22.836 event/dpaa: not in enabled drivers build config 00:01:22.836 event/dpaa2: not in enabled drivers build config 00:01:22.836 event/dsw: not in enabled drivers build config 00:01:22.836 event/opdl: not in enabled drivers build config 00:01:22.836 event/skeleton: not in enabled drivers build config 00:01:22.836 event/sw: not in enabled drivers build config 00:01:22.836 event/octeontx: not in enabled drivers build config 00:01:22.836 baseband/acc: not in enabled drivers build config 00:01:22.836 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:22.836 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:22.836 baseband/la12xx: not in enabled drivers build config 00:01:22.836 baseband/null: not in enabled drivers build config 00:01:22.836 baseband/turbo_sw: not in enabled drivers build config 00:01:22.836 gpu/cuda: not in enabled drivers build config 00:01:22.836 00:01:22.836 00:01:22.836 Build targets in project: 311 00:01:22.836 00:01:22.836 DPDK 22.11.4 00:01:22.836 00:01:22.836 User defined options 00:01:22.836 libdir : lib 00:01:22.836 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:22.836 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:22.836 c_link_args : 00:01:22.836 enable_docs : false 00:01:22.836 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:22.836 enable_kmods : false 00:01:22.836 machine : native 00:01:22.836 tests : false 00:01:22.836 00:01:22.836 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:22.836 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
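Meson configuration of the external DPDK tree finishes here, and the Ninja build below compiles only the driver subset SPDK needs (bus/pci, bus/vdev, mempool/ring, net/i40e). A standalone sketch of the same build, with paths and options copied from the logged invocation; the "meson setup" spelling, the nproc job count, and the final install step are assumptions, not taken from this log:

    DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    cd "$DPDK_DIR"
    meson setup build-tmp \
        --prefix="$DPDK_DIR/build" --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dc_link_args= \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    ninja -C build-tmp -j"$(nproc)"
    meson install -C build-tmp   # stage into $DPDK_DIR/build, which SPDK's --with-dpdk points at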
00:01:22.836 22:00:17 -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:01:22.836 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:22.836 [1/740] Generating lib/rte_kvargs_def with a custom command 00:01:22.836 [2/740] Generating lib/rte_telemetry_mingw with a custom command 00:01:22.836 [3/740] Generating lib/rte_kvargs_mingw with a custom command 00:01:22.836 [4/740] Generating lib/rte_telemetry_def with a custom command 00:01:23.096 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:23.096 [6/740] Generating lib/rte_rcu_def with a custom command 00:01:23.096 [7/740] Generating lib/rte_eal_def with a custom command 00:01:23.096 [8/740] Generating lib/rte_eal_mingw with a custom command 00:01:23.096 [9/740] Generating lib/rte_mempool_def with a custom command 00:01:23.096 [10/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:23.096 [11/740] Generating lib/rte_ring_def with a custom command 00:01:23.096 [12/740] Generating lib/rte_mbuf_def with a custom command 00:01:23.096 [13/740] Generating lib/rte_mempool_mingw with a custom command 00:01:23.096 [14/740] Generating lib/rte_mbuf_mingw with a custom command 00:01:23.096 [15/740] Generating lib/rte_rcu_mingw with a custom command 00:01:23.096 [16/740] Generating lib/rte_ring_mingw with a custom command 00:01:23.096 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:23.096 [18/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:23.096 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:23.096 [20/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:23.096 [21/740] Generating lib/rte_meter_def with a custom command 00:01:23.096 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:23.096 [23/740] Generating lib/rte_net_def with a custom command 00:01:23.096 [24/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:23.096 [25/740] Generating lib/rte_meter_mingw with a custom command 00:01:23.096 [26/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:23.096 [27/740] Generating lib/rte_net_mingw with a custom command 00:01:23.096 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:23.096 [29/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:23.096 [30/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:23.096 [31/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:23.096 [32/740] Linking static target lib/librte_kvargs.a 00:01:23.096 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:23.096 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:23.096 [35/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:23.096 [36/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:23.096 [37/740] Generating lib/rte_ethdev_def with a custom command 00:01:23.096 [38/740] Generating lib/rte_ethdev_mingw with a custom command 00:01:23.096 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:23.096 [40/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:23.096 [41/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 
00:01:23.096 [42/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:23.096 [43/740] Generating lib/rte_pci_def with a custom command 00:01:23.096 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:23.096 [45/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:23.096 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:23.096 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:23.096 [48/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:23.096 [49/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:23.096 [50/740] Generating lib/rte_pci_mingw with a custom command 00:01:23.096 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:23.096 [52/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:23.096 [53/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:23.096 [54/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:23.096 [55/740] Generating lib/rte_cmdline_mingw with a custom command 00:01:23.096 [56/740] Generating lib/rte_cmdline_def with a custom command 00:01:23.358 [57/740] Generating lib/rte_metrics_def with a custom command 00:01:23.358 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:23.358 [59/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:23.358 [60/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:23.358 [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:23.358 [62/740] Generating lib/rte_metrics_mingw with a custom command 00:01:23.358 [63/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:23.358 [64/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:23.358 [65/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:23.358 [66/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:23.358 [67/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:23.358 [68/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:23.358 [69/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:23.358 [70/740] Generating lib/rte_hash_mingw with a custom command 00:01:23.358 [71/740] Generating lib/rte_hash_def with a custom command 00:01:23.358 [72/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:23.358 [73/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:23.358 [74/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:23.358 [75/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:23.358 [76/740] Generating lib/rte_timer_mingw with a custom command 00:01:23.358 [77/740] Generating lib/rte_timer_def with a custom command 00:01:23.358 [78/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:23.358 [79/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:23.358 [80/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:23.358 [81/740] Generating lib/rte_acl_mingw with a custom command 00:01:23.358 [82/740] Generating lib/rte_acl_def with a custom command 00:01:23.358 [83/740] Generating lib/rte_bbdev_def with a 
custom command 00:01:23.358 [84/740] Linking static target lib/librte_pci.a 00:01:23.358 [85/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:23.358 [86/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:23.358 [87/740] Generating lib/rte_bbdev_mingw with a custom command 00:01:23.358 [88/740] Generating lib/rte_bitratestats_def with a custom command 00:01:23.358 [89/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:23.358 [90/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:23.358 [91/740] Generating lib/rte_bitratestats_mingw with a custom command 00:01:23.358 [92/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:23.358 [93/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:23.358 [94/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:23.358 [95/740] Linking static target lib/librte_ring.a 00:01:23.358 [96/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:23.358 [97/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:23.358 [98/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:23.358 [99/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:23.358 [100/740] Linking static target lib/librte_meter.a 00:01:23.358 [101/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:23.358 [102/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:23.358 [103/740] Generating lib/rte_bpf_def with a custom command 00:01:23.358 [104/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:23.358 [105/740] Generating lib/rte_cfgfile_def with a custom command 00:01:23.358 [106/740] Generating lib/rte_bpf_mingw with a custom command 00:01:23.358 [107/740] Generating lib/rte_cfgfile_mingw with a custom command 00:01:23.358 [108/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:23.359 [109/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:23.359 [110/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:23.359 [111/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:23.359 [112/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:23.359 [113/740] Generating lib/rte_compressdev_def with a custom command 00:01:23.359 [114/740] Generating lib/rte_compressdev_mingw with a custom command 00:01:23.359 [115/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:23.359 [116/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:23.359 [117/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:23.359 [118/740] Generating lib/rte_cryptodev_def with a custom command 00:01:23.359 [119/740] Generating lib/rte_cryptodev_mingw with a custom command 00:01:23.359 [120/740] Generating lib/rte_distributor_def with a custom command 00:01:23.359 [121/740] Generating lib/rte_distributor_mingw with a custom command 00:01:23.359 [122/740] Generating lib/rte_efd_def with a custom command 00:01:23.359 [123/740] Generating lib/rte_efd_mingw with a custom command 00:01:23.621 [124/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:23.621 [125/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:23.621 [126/740] Generating lib/rte_eventdev_def with a custom command 00:01:23.621 [127/740] Generating lib/rte_eventdev_mingw with a custom command 00:01:23.621 [128/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:23.621 [129/740] Linking target lib/librte_kvargs.so.23.0 00:01:23.621 [130/740] Generating lib/rte_gpudev_def with a custom command 00:01:23.621 [131/740] Generating lib/rte_gpudev_mingw with a custom command 00:01:23.621 [132/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:23.621 [133/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.621 [134/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:23.621 [135/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:23.621 [136/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:23.621 [137/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:23.621 [138/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.621 [139/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:23.621 [140/740] Generating lib/rte_gro_def with a custom command 00:01:23.621 [141/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:23.622 [142/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:23.622 [143/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:23.622 [144/740] Generating lib/rte_gro_mingw with a custom command 00:01:23.622 [145/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:23.622 [146/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:23.622 [147/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:23.622 [148/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:23.622 [149/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.622 [150/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:23.883 [151/740] Linking static target lib/librte_cfgfile.a 00:01:23.883 [152/740] Generating lib/rte_gso_def with a custom command 00:01:23.883 [153/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:23.883 [154/740] Generating lib/rte_gso_mingw with a custom command 00:01:23.883 [155/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:23.883 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:23.883 [157/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:23.883 [158/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:23.883 [159/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:23.883 [160/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:23.883 [161/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:23.884 [162/740] Generating lib/rte_ip_frag_mingw with a custom command 00:01:23.884 [163/740] Generating lib/rte_ip_frag_def with a custom command 00:01:23.884 [164/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:23.884 [165/740] Linking static target lib/librte_cmdline.a 00:01:23.884 [166/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:23.884 
[167/740] Generating lib/rte_jobstats_def with a custom command 00:01:23.884 [168/740] Generating lib/rte_jobstats_mingw with a custom command 00:01:23.884 [169/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:23.884 [170/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:23.884 [171/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:23.884 [172/740] Generating lib/rte_latencystats_def with a custom command 00:01:23.884 [173/740] Generating lib/rte_latencystats_mingw with a custom command 00:01:23.884 [174/740] Linking static target lib/librte_telemetry.a 00:01:23.884 [175/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:23.884 [176/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:23.884 [177/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:23.884 [178/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:23.884 [179/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:23.884 [180/740] Generating lib/rte_lpm_mingw with a custom command 00:01:23.884 [181/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:23.884 [182/740] Generating lib/rte_lpm_def with a custom command 00:01:23.884 [183/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:23.884 [184/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:23.884 [185/740] Linking static target lib/librte_metrics.a 00:01:23.884 [186/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:23.884 [187/740] Linking static target lib/librte_net.a 00:01:23.884 [188/740] Generating lib/rte_member_mingw with a custom command 00:01:23.884 [189/740] Generating lib/rte_member_def with a custom command 00:01:23.884 [190/740] Linking static target lib/librte_timer.a 00:01:23.884 [191/740] Generating lib/rte_pcapng_mingw with a custom command 00:01:23.884 [192/740] Generating lib/rte_pcapng_def with a custom command 00:01:23.884 [193/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:23.884 [194/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:23.884 [195/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:23.884 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:23.884 [197/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:23.884 [198/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:23.884 [199/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:23.884 [200/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:23.884 [201/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:23.884 [202/740] Linking static target lib/librte_bitratestats.a 00:01:23.884 [203/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:23.884 [204/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:23.884 [205/740] Linking static target lib/librte_jobstats.a 00:01:23.884 [206/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:23.884 [207/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:23.884 [208/740] Generating lib/rte_power_def with a custom command 00:01:23.884 [209/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:23.884 [210/740] Generating lib/rte_power_mingw with a 
custom command 00:01:23.884 [211/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:24.147 [212/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:24.147 [213/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:24.147 [214/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:24.147 [215/740] Generating lib/rte_rawdev_def with a custom command 00:01:24.147 [216/740] Generating lib/rte_rawdev_mingw with a custom command 00:01:24.147 [217/740] Generating lib/rte_regexdev_def with a custom command 00:01:24.147 [218/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:24.147 [219/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:24.147 [220/740] Generating lib/rte_regexdev_mingw with a custom command 00:01:24.147 [221/740] Generating lib/rte_dmadev_def with a custom command 00:01:24.147 [222/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:24.147 [223/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:24.147 [224/740] Generating lib/rte_dmadev_mingw with a custom command 00:01:24.147 [225/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:24.147 [226/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:24.147 [227/740] Generating lib/rte_rib_def with a custom command 00:01:24.147 [228/740] Generating lib/rte_rib_mingw with a custom command 00:01:24.147 [229/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:24.147 [230/740] Generating lib/rte_reorder_def with a custom command 00:01:24.147 [231/740] Generating lib/rte_reorder_mingw with a custom command 00:01:24.147 [232/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:24.147 [233/740] Generating lib/rte_sched_def with a custom command 00:01:24.147 [234/740] Generating lib/rte_sched_mingw with a custom command 00:01:24.147 [235/740] Generating lib/rte_security_def with a custom command 00:01:24.147 [236/740] Generating lib/rte_security_mingw with a custom command 00:01:24.147 [237/740] Generating lib/rte_stack_def with a custom command 00:01:24.147 [238/740] Generating lib/rte_stack_mingw with a custom command 00:01:24.147 [239/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:24.147 [240/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:24.147 [241/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:24.147 [242/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:24.147 [243/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:24.147 [244/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.147 [245/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:24.147 [246/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:24.147 [247/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:24.147 [248/740] Generating lib/rte_vhost_def with a custom command 00:01:24.147 [249/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:24.147 [250/740] Generating lib/rte_vhost_mingw with a custom command 00:01:24.147 [251/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.147 [252/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:24.147 [253/740] 
Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:24.147 [254/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:24.147 [255/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:24.147 [256/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:24.148 [257/740] Linking static target lib/librte_compressdev.a 00:01:24.148 [258/740] Linking static target lib/librte_stack.a 00:01:24.414 [259/740] Generating lib/rte_ipsec_def with a custom command 00:01:24.414 [260/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:24.414 [261/740] Generating lib/rte_ipsec_mingw with a custom command 00:01:24.414 [262/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.414 [263/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:24.414 [264/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:24.414 [265/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:24.414 [266/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:24.414 [267/740] Linking static target lib/librte_mempool.a 00:01:24.415 [268/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:24.415 [269/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:24.415 [270/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:24.415 [271/740] Generating lib/rte_fib_mingw with a custom command 00:01:24.415 [272/740] Generating lib/rte_fib_def with a custom command 00:01:24.415 [273/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.415 [274/740] Linking static target lib/librte_rcu.a 00:01:24.415 [275/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:24.415 [276/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.415 [277/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:24.415 [278/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:24.415 [279/740] Linking target lib/librte_telemetry.so.23.0 00:01:24.415 [280/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:24.415 [281/740] Linking static target lib/librte_bbdev.a 00:01:24.415 [282/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.415 [283/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.415 [284/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:24.415 [285/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:24.415 [286/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:24.415 [287/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:24.415 [288/740] Generating lib/rte_port_def with a custom command 00:01:24.415 [289/740] Generating lib/rte_port_mingw with a custom command 00:01:24.415 [290/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:24.415 [291/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:24.415 [292/740] Linking static target lib/librte_rawdev.a 00:01:24.415 [293/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:24.415 [294/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:24.415 [295/740] Generating lib/rte_pdump_mingw with a 
custom command 00:01:24.415 [296/740] Generating lib/rte_pdump_def with a custom command 00:01:24.679 [297/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.679 [298/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:24.679 [299/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:24.679 [300/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:24.679 [301/740] Linking static target lib/librte_gpudev.a 00:01:24.679 [302/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:24.679 [303/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:24.679 [304/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:24.680 [305/740] Linking static target lib/librte_gro.a 00:01:24.680 [306/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:24.680 [307/740] Linking static target lib/librte_dmadev.a 00:01:24.680 [308/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:24.680 [309/740] Linking static target lib/librte_distributor.a 00:01:24.680 [310/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:24.680 [311/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:24.680 [312/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:24.680 [313/740] Linking static target lib/librte_latencystats.a 00:01:24.680 [314/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:24.680 [315/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:24.680 [316/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:24.680 [317/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:24.680 [318/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:24.680 [319/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:24.680 [320/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:24.680 [321/740] Generating lib/rte_table_def with a custom command 00:01:24.680 [322/740] Linking static target lib/librte_eal.a 00:01:24.680 [323/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:24.680 [324/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:24.680 [325/740] Linking static target lib/librte_gso.a 00:01:24.680 [326/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.680 [327/740] Generating lib/rte_table_mingw with a custom command 00:01:24.680 [328/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:24.680 [329/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:24.942 [330/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:24.942 [331/740] Linking static target lib/librte_regexdev.a 00:01:24.942 [332/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:24.942 [333/740] Generating lib/rte_pipeline_def with a custom command 00:01:24.942 [334/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:24.942 [335/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:24.942 [336/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:24.942 [337/740] Generating 
lib/rte_pipeline_mingw with a custom command 00:01:24.942 [338/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:24.942 [339/740] Linking static target lib/librte_ip_frag.a 00:01:24.942 [340/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:24.942 [341/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.942 [342/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:24.942 [343/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:24.942 [344/740] Generating lib/rte_graph_def with a custom command 00:01:24.942 [345/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:24.942 [346/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:24.942 [347/740] Linking static target lib/librte_power.a 00:01:24.942 [348/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:24.942 [349/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.942 [350/740] Generating lib/rte_graph_mingw with a custom command 00:01:24.942 [351/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:24.942 [352/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:24.942 [353/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:24.942 [354/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:24.942 [355/740] Linking static target lib/librte_mbuf.a 00:01:24.942 [356/740] Linking static target lib/librte_reorder.a 00:01:24.942 [357/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.942 [358/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:24.942 [359/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.942 [360/740] Linking static target lib/librte_security.a 00:01:24.942 [361/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:25.203 [362/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:25.203 [363/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:25.203 [364/740] Linking static target lib/librte_pcapng.a 00:01:25.203 [365/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:01:25.203 [366/740] Linking static target lib/librte_bpf.a 00:01:25.203 [367/740] Generating lib/rte_node_def with a custom command 00:01:25.203 [368/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:25.203 [369/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:01:25.203 [370/740] Generating lib/rte_node_mingw with a custom command 00:01:25.203 [371/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.203 [372/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.203 [373/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:25.203 [374/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:25.203 [375/740] Generating drivers/rte_bus_pci_def with a custom command 00:01:25.203 [376/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.203 [377/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:25.203 [378/740] Generating drivers/rte_bus_pci_mingw with a custom command 
00:01:25.203 [379/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:25.203 [380/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:25.203 [381/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:25.203 [382/740] Generating drivers/rte_bus_vdev_def with a custom command 00:01:25.203 [383/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:25.203 [384/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:25.203 [385/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:25.203 [386/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:25.203 [387/740] Generating drivers/rte_mempool_ring_def with a custom command 00:01:25.203 [388/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:25.467 [389/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.467 [390/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:25.467 [391/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:25.467 [392/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:25.467 [393/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:25.467 [394/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.467 [395/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:25.467 [396/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:25.467 [397/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.467 [398/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:25.468 [399/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:25.468 [400/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:25.468 [401/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:25.468 [402/740] Linking static target lib/librte_rib.a 00:01:25.468 [403/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.468 [404/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:25.468 [405/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:25.468 [406/740] Linking static target lib/librte_lpm.a 00:01:25.468 [407/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:25.468 [408/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:25.468 [409/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:25.468 [410/740] Generating drivers/rte_net_i40e_def with a custom command 00:01:25.468 [411/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:25.468 [412/740] Linking static target lib/librte_efd.a 00:01:25.468 [413/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:25.468 [414/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:25.468 [415/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:25.468 [416/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.468 [417/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:25.468 [418/740] Compiling C object 
lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:25.468 [419/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.468 [420/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.468 [421/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:25.468 [422/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:25.468 [423/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:25.731 [424/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:25.731 [425/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:25.731 [426/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:25.731 [427/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:25.731 [428/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:25.731 [429/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:25.731 [430/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:25.731 [431/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:25.731 [432/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:25.731 [433/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:25.731 [434/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:25.731 [435/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:25.731 [436/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.731 [437/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:25.731 [438/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:25.731 [439/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:25.731 [440/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:25.731 [441/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:25.731 [442/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:25.731 [443/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:25.731 [444/740] Linking static target lib/librte_graph.a 00:01:25.731 [445/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:25.731 [446/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.731 [447/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.731 [448/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.731 [449/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:25.731 [450/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:25.994 [451/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:25.994 [452/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:25.994 [453/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:25.994 [454/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:25.994 [455/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:25.994 [456/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.994 [457/740] Generating lib/regexdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:25.994 [458/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:25.994 [459/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:25.994 [460/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.994 [461/740] Linking static target drivers/librte_bus_vdev.a 00:01:25.994 [462/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.994 [463/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:25.994 [464/740] Linking static target lib/librte_fib.a 00:01:25.994 [465/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:25.994 [466/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:25.994 [467/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:25.994 [468/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.256 [469/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:26.256 [470/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:26.256 [471/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:26.256 [472/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:26.256 [473/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:26.256 [474/740] Linking static target lib/librte_pdump.a 00:01:26.256 [475/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:26.256 [476/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:26.256 [477/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:26.256 [478/740] Linking static target drivers/librte_bus_pci.a 00:01:26.256 [479/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:26.519 [480/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:26.519 [481/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.519 [482/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:26.519 [483/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:26.519 [484/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:26.519 [485/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:26.519 [486/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:26.519 [487/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:26.519 [488/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:26.519 [489/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.519 [490/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:26.519 [491/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:26.519 [492/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:26.519 [493/740] Linking static target lib/librte_table.a 00:01:26.519 [494/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.519 [495/740] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:26.519 [496/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:26.519 [497/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:26.780 [498/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:26.780 [499/740] Linking static target lib/librte_cryptodev.a 00:01:26.780 [500/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:26.780 [501/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:26.780 [502/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.780 [503/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:26.780 [504/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:26.780 [505/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:26.780 [506/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:26.780 [507/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:26.780 [508/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:26.780 [509/740] Linking static target lib/librte_sched.a 00:01:26.780 [510/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:26.780 [511/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:26.780 [512/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:26.780 [513/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:26.780 [514/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:26.780 [515/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:26.780 [516/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:26.780 [517/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:26.780 [518/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:26.780 [519/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:26.780 [520/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:26.780 [521/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:26.780 [522/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:26.780 [523/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:27.048 [524/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.048 [525/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:27.048 [526/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:27.048 [527/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:27.048 [528/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:27.048 [529/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:27.048 [530/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:27.048 [531/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:27.048 [532/740] 
Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:27.048 [533/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:27.048 [534/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:27.048 [535/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:27.048 [536/740] Linking static target lib/librte_node.a 00:01:27.048 [537/740] Linking static target lib/librte_ipsec.a 00:01:27.048 [538/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:27.048 [539/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:27.048 [540/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.048 [541/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:27.048 [542/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:27.048 [543/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:27.048 [544/740] Linking static target lib/librte_ethdev.a 00:01:27.048 [545/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:27.048 [546/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:27.048 [547/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:27.312 [548/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:27.312 [549/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:27.312 [550/740] Linking static target lib/librte_member.a 00:01:27.312 [551/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:27.312 [552/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.312 [553/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:27.312 [554/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:27.312 [555/740] Linking static target drivers/librte_mempool_ring.a 00:01:27.312 [556/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:27.312 [557/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:27.312 [558/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.312 [559/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:27.312 [560/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.312 [561/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:27.312 [562/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:27.312 [563/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:27.312 [564/740] Linking static target lib/librte_port.a 00:01:27.312 [565/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:27.312 [566/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:27.312 [567/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:27.312 [568/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:27.312 [569/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.312 [570/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:27.312 [571/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:27.312 [572/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:27.312 [573/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:27.312 [574/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:27.312 [575/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:27.571 [576/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:27.571 [577/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:27.571 [578/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:27.571 [579/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:27.571 [580/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:27.571 [581/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:27.571 [582/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:27.571 [583/740] Linking static target lib/librte_eventdev.a 00:01:27.571 [584/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:27.571 [585/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:27.571 [586/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:27.571 [587/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:27.571 [588/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.571 [589/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:27.571 [590/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:27.571 [591/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:27.571 [592/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:01:27.571 [593/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:27.571 [594/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:27.571 [595/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:27.829 [596/740] Linking static target lib/librte_hash.a 00:01:27.829 [597/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:27.829 [598/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:27.829 [599/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:27.829 [600/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:27.829 [601/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:01:27.829 [602/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:27.829 [603/740] Linking static target lib/librte_acl.a 00:01:27.829 [604/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:27.829 [605/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:27.829 [606/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:28.088 [607/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:28.088 [608/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.088 [609/740] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:28.088 [610/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:01:28.088 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:28.349 [612/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.349 [613/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:28.349 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:28.615 [615/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.873 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:28.873 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:28.873 [618/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:29.442 [619/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.442 [620/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:29.442 [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:29.442 [622/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:30.015 [623/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:30.015 [624/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:30.015 [625/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.276 [626/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:30.276 [627/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:30.276 [628/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:30.276 [629/740] Linking static target drivers/librte_net_i40e.a 00:01:30.843 [630/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:31.102 [631/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:31.102 [632/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:31.361 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.900 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.279 [635/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.279 [636/740] Linking target lib/librte_eal.so.23.0 00:01:35.536 [637/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:35.536 [638/740] Linking target lib/librte_dmadev.so.23.0 00:01:35.536 [639/740] Linking target lib/librte_timer.so.23.0 00:01:35.536 [640/740] Linking target lib/librte_ring.so.23.0 00:01:35.536 [641/740] Linking target lib/librte_pci.so.23.0 00:01:35.536 [642/740] Linking target lib/librte_meter.so.23.0 00:01:35.536 [643/740] Linking target lib/librte_jobstats.so.23.0 00:01:35.536 [644/740] Linking target lib/librte_stack.so.23.0 00:01:35.536 [645/740] Linking target lib/librte_rawdev.so.23.0 00:01:35.536 [646/740] Linking target drivers/librte_bus_vdev.so.23.0 00:01:35.536 [647/740] Linking target lib/librte_cfgfile.so.23.0 00:01:35.536 [648/740] Linking target lib/librte_graph.so.23.0 00:01:35.536 [649/740] Linking target lib/librte_acl.so.23.0 00:01:35.536 [650/740] Generating symbol file 
lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:35.536 [651/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:35.536 [652/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:35.536 [653/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:35.536 [654/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:35.536 [655/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:35.536 [656/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:35.536 [657/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:35.536 [658/740] Linking target lib/librte_rcu.so.23.0 00:01:35.794 [659/740] Linking target lib/librte_mempool.so.23.0 00:01:35.794 [660/740] Linking target drivers/librte_bus_pci.so.23.0 00:01:35.794 [661/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:35.794 [662/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:35.794 [663/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:35.794 [664/740] Linking target drivers/librte_mempool_ring.so.23.0 00:01:35.794 [665/740] Linking target lib/librte_rib.so.23.0 00:01:35.794 [666/740] Linking target lib/librte_mbuf.so.23.0 00:01:36.054 [667/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:36.054 [668/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:36.054 [669/740] Linking target lib/librte_fib.so.23.0 00:01:36.054 [670/740] Linking target lib/librte_bbdev.so.23.0 00:01:36.054 [671/740] Linking target lib/librte_regexdev.so.23.0 00:01:36.054 [672/740] Linking target lib/librte_reorder.so.23.0 00:01:36.054 [673/740] Linking target lib/librte_compressdev.so.23.0 00:01:36.054 [674/740] Linking target lib/librte_distributor.so.23.0 00:01:36.054 [675/740] Linking target lib/librte_net.so.23.0 00:01:36.054 [676/740] Linking target lib/librte_cryptodev.so.23.0 00:01:36.054 [677/740] Linking target lib/librte_gpudev.so.23.0 00:01:36.054 [678/740] Linking target lib/librte_sched.so.23.0 00:01:36.054 [679/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:36.054 [680/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:36.054 [681/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:36.313 [682/740] Linking target lib/librte_hash.so.23.0 00:01:36.313 [683/740] Linking target lib/librte_security.so.23.0 00:01:36.313 [684/740] Linking target lib/librte_cmdline.so.23.0 00:01:36.313 [685/740] Linking target lib/librte_ethdev.so.23.0 00:01:36.313 [686/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:36.313 [687/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:36.313 [688/740] Linking target lib/librte_member.so.23.0 00:01:36.313 [689/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:36.313 [690/740] Linking target lib/librte_efd.so.23.0 00:01:36.313 [691/740] Linking target lib/librte_lpm.so.23.0 00:01:36.313 [692/740] Linking target lib/librte_ipsec.so.23.0 00:01:36.313 [693/740] Linking target lib/librte_metrics.so.23.0 00:01:36.313 [694/740] Linking target 
lib/librte_gro.so.23.0 00:01:36.313 [695/740] Linking target lib/librte_gso.so.23.0 00:01:36.313 [696/740] Linking target lib/librte_ip_frag.so.23.0 00:01:36.313 [697/740] Linking target lib/librte_eventdev.so.23.0 00:01:36.313 [698/740] Linking target lib/librte_pcapng.so.23.0 00:01:36.313 [699/740] Linking target lib/librte_bpf.so.23.0 00:01:36.313 [700/740] Linking target lib/librte_power.so.23.0 00:01:36.649 [701/740] Linking target drivers/librte_net_i40e.so.23.0 00:01:36.649 [702/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:36.649 [703/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:36.649 [704/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:36.649 [705/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:36.649 [706/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:36.649 [707/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:36.649 [708/740] Linking target lib/librte_node.so.23.0 00:01:36.649 [709/740] Linking target lib/librte_latencystats.so.23.0 00:01:36.649 [710/740] Linking target lib/librte_bitratestats.so.23.0 00:01:36.649 [711/740] Linking target lib/librte_pdump.so.23.0 00:01:36.649 [712/740] Linking target lib/librte_port.so.23.0 00:01:36.908 [713/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:36.908 [714/740] Linking target lib/librte_table.so.23.0 00:01:36.908 [715/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:37.168 [716/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:37.428 [717/740] Linking static target lib/librte_vhost.a 00:01:37.997 [718/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:01:37.997 [719/740] Linking static target lib/librte_pipeline.a 00:01:38.256 [720/740] Linking target app/dpdk-dumpcap 00:01:38.256 [721/740] Linking target app/dpdk-test-acl 00:01:38.256 [722/740] Linking target app/dpdk-proc-info 00:01:38.256 [723/740] Linking target app/dpdk-test-compress-perf 00:01:38.256 [724/740] Linking target app/dpdk-test-crypto-perf 00:01:38.256 [725/740] Linking target app/dpdk-test-sad 00:01:38.256 [726/740] Linking target app/dpdk-test-fib 00:01:38.256 [727/740] Linking target app/dpdk-test-pipeline 00:01:38.256 [728/740] Linking target app/dpdk-test-bbdev 00:01:38.256 [729/740] Linking target app/dpdk-pdump 00:01:38.256 [730/740] Linking target app/dpdk-test-security-perf 00:01:38.256 [731/740] Linking target app/dpdk-test-cmdline 00:01:38.256 [732/740] Linking target app/dpdk-test-regex 00:01:38.256 [733/740] Linking target app/dpdk-test-gpudev 00:01:38.256 [734/740] Linking target app/dpdk-test-flow-perf 00:01:38.256 [735/740] Linking target app/dpdk-test-eventdev 00:01:38.523 [736/740] Linking target app/dpdk-testpmd 00:01:39.096 [737/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.096 [738/740] Linking target lib/librte_vhost.so.23.0 00:01:42.397 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.397 [740/740] Linking target lib/librte_pipeline.so.23.0 00:01:42.397 22:00:37 -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:01:42.397 ninja: Entering directory 
`/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:42.397 [0/1] Installing files. 00:01:42.397 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:01:42.397 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:42.398 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:42.398 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:42.398 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:42.399 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:01:42.399 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:01:42.400 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.400 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:42.401 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 
00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:42.401 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:42.402 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.402 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:42.403 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:01:42.403 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.403 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.403 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.403 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.403 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.403 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.403 Installing lib/librte_ring.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.403 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.403 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.403 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.403 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:01:42.668 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:01:42.668 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing lib/librte_node.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:42.668 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:42.668 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:42.668 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:42.668 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:01:42.668 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.668 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.668 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.668 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.668 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.668 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.668 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.668 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.668 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.669 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.669 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.669 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.669 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.669 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.669 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.669 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.669 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.669 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.670 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.671 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 
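With the headers and the usertools scripts staged, a quick spot-check of the prefix that the SPDK configure step further down will consume could look like the following sketch (paths follow this job's workspace; the libdpdk.pc queried here is installed a few entries below):

    DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
    ls "$DPDK_BUILD/include/rte_eventdev.h" "$DPDK_BUILD/bin/dpdk-devbind.py"
    PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig" pkg-config --modversion libdpdk

pkg-config reporting a version from that prefix is a reasonable sign the staged install is self-consistent before SPDK's configure points at it.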
00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:42.672 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:01:42.673 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:01:42.673 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:01:42.673 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:01:42.673 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:01:42.673 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:01:42.673 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:01:42.673 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:01:42.673 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:01:42.673 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:01:42.673 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:01:42.673 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:01:42.673 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:01:42.673 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:01:42.673 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:01:42.673 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:01:42.673 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:01:42.673 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:01:42.673 Installing symlink pointing to librte_meter.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:01:42.673 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:01:42.673 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:01:42.673 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:01:42.673 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:01:42.673 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:01:42.673 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:01:42.673 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:01:42.673 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:01:42.673 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:01:42.673 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:01:42.673 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:01:42.673 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:01:42.673 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:01:42.673 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:01:42.673 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:01:42.673 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:01:42.673 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:01:42.673 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:01:42.673 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:01:42.673 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:01:42.673 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:01:42.673 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:01:42.673 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:01:42.673 Installing symlink pointing to librte_compressdev.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:01:42.673 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:01:42.673 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:01:42.673 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:01:42.673 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:01:42.673 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:01:42.673 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:01:42.673 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:01:42.673 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:01:42.673 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:01:42.673 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:01:42.673 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:01:42.673 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:01:42.673 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:01:42.673 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:01:42.673 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:01:42.673 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:01:42.673 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:01:42.673 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:01:42.673 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:01:42.673 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:01:42.673 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:01:42.673 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:01:42.673 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:01:42.673 Installing symlink pointing to librte_member.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:01:42.673 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:01:42.673 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:01:42.673 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:01:42.673 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:01:42.673 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:01:42.673 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:01:42.673 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:01:42.673 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:01:42.673 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:01:42.673 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:01:42.673 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:01:42.673 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:01:42.673 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:01:42.673 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:01:42.673 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:01:42.673 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:01:42.673 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:01:42.673 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:01:42.673 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:01:42.673 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:01:42.673 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:01:42.673 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:01:42.673 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:01:42.673 Installing symlink pointing to librte_ipsec.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:01:42.673 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:01:42.673 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:01:42.673 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:01:42.674 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:01:42.674 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:01:42.674 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:01:42.674 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:01:42.674 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:01:42.674 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:01:42.674 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:01:42.674 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:01:42.674 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:01:42.674 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:01:42.674 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:01:42.674 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:01:42.674 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:01:42.674 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:01:42.674 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:01:42.674 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:01:42.674 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:01:42.674 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:01:42.674 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:01:42.674 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:01:42.674 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:01:42.674 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:01:42.674 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:01:42.674 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:01:42.674 './librte_net_i40e.so' -> 
'dpdk/pmds-23.0/librte_net_i40e.so' 00:01:42.674 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:01:42.674 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:01:42.674 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:01:42.674 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:01:42.674 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:01:42.674 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:01:42.674 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:01:42.932 22:00:37 -- common/autobuild_common.sh@192 -- $ uname -s 00:01:42.932 22:00:37 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:01:42.932 22:00:37 -- common/autobuild_common.sh@203 -- $ cat 00:01:42.932 22:00:37 -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:42.932 00:01:42.932 real 0m24.550s 00:01:42.932 user 6m53.328s 00:01:42.932 sys 1m44.191s 00:01:42.932 22:00:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:42.932 22:00:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:42.932 ************************************ 00:01:42.933 END TEST build_native_dpdk 00:01:42.933 ************************************ 00:01:42.933 22:00:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:42.933 22:00:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:42.933 22:00:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:42.933 22:00:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:42.933 22:00:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:42.933 22:00:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:42.933 22:00:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:42.933 22:00:37 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:01:42.933 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:01:43.191 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:01:43.191 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:01:43.191 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:43.450 Using 'verbs' RDMA provider 00:01:56.263 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:02:06.251 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:06.509 Creating mk/config.mk...done. 00:02:06.509 Creating mk/cc.flags.mk...done. 00:02:06.509 Type 'make' to build. 
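For use outside this job, the configure step above amounts to pointing SPDK at the DPDK prefix that was just staged. A trimmed-down sketch keeping only the DPDK-related options (workspace paths as used in this run; job count left to nproc rather than the fixed -j96 used here):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure \
        --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --with-shared \
        --enable-debug --enable-werror
    make -j"$(nproc)"

configure resolves the external DPDK libraries and includes through the pkg-config data under that prefix, which is why the "Using .../dpdk/build/lib/pkgconfig for additional libs..." line appears in the output above.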
00:02:06.509 22:01:01 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:02:06.509 22:01:01 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:06.509 22:01:01 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:06.509 22:01:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.509 ************************************ 00:02:06.509 START TEST make 00:02:06.509 ************************************ 00:02:06.509 22:01:01 -- common/autotest_common.sh@1104 -- $ make -j96 00:02:06.767 make[1]: Nothing to be done for 'all'. 00:02:08.167 The Meson build system 00:02:08.167 Version: 1.3.1 00:02:08.167 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:08.167 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:08.167 Build type: native build 00:02:08.167 Project name: libvfio-user 00:02:08.167 Project version: 0.0.1 00:02:08.167 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:08.167 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:08.167 Host machine cpu family: x86_64 00:02:08.167 Host machine cpu: x86_64 00:02:08.167 Run-time dependency threads found: YES 00:02:08.167 Library dl found: YES 00:02:08.167 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:08.167 Run-time dependency json-c found: YES 0.17 00:02:08.167 Run-time dependency cmocka found: YES 1.1.7 00:02:08.167 Program pytest-3 found: NO 00:02:08.167 Program flake8 found: NO 00:02:08.167 Program misspell-fixer found: NO 00:02:08.167 Program restructuredtext-lint found: NO 00:02:08.167 Program valgrind found: YES (/usr/bin/valgrind) 00:02:08.167 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:08.167 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:08.167 Compiler for C supports arguments -Wwrite-strings: YES 00:02:08.167 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:08.167 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:08.167 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:08.167 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:08.167 Build targets in project: 8 00:02:08.167 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:08.167 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:08.167 00:02:08.167 libvfio-user 0.0.1 00:02:08.167 00:02:08.167 User defined options 00:02:08.167 buildtype : debug 00:02:08.167 default_library: shared 00:02:08.167 libdir : /usr/local/lib 00:02:08.167 00:02:08.167 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:08.745 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:08.745 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:08.745 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:08.745 [3/37] Compiling C object samples/null.p/null.c.o 00:02:08.745 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:08.745 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:08.745 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:08.745 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:08.745 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:08.745 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:08.745 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:08.745 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:08.745 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:08.745 [13/37] Compiling C object samples/server.p/server.c.o 00:02:08.745 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:08.745 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:08.745 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:08.745 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:08.745 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:08.745 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:08.745 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:08.745 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:08.745 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:08.745 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:08.745 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:08.745 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:08.745 [26/37] Compiling C object samples/client.p/client.c.o 00:02:08.745 [27/37] Linking target samples/client 00:02:09.006 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:09.006 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:09.006 [30/37] Linking target test/unit_tests 00:02:09.006 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:09.006 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:09.006 [33/37] Linking target samples/server 00:02:09.006 [34/37] Linking target samples/shadow_ioeventfd_server 00:02:09.006 [35/37] Linking target samples/lspci 00:02:09.006 [36/37] Linking target samples/null 00:02:09.006 [37/37] Linking target samples/gpio-pci-idio-16 00:02:09.006 INFO: autodetecting backend as ninja 00:02:09.006 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
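The libvfio-user steps logged here amount to a Meson configure, a Ninja build of the steps listed above, and a DESTDIR-staged install (shown in the next entry). A rough standalone equivalent is something like the sketch below, with directory names taken from this job's layout and the options mirrored from the "User defined options" summary:

    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    meson setup --buildtype debug -Ddefault_library=shared --libdir /usr/local/lib "$BUILD" "$SRC"
    ninja -C "$BUILD"
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C "$BUILD"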
00:02:09.264 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:09.524 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:09.524 ninja: no work to do. 00:02:17.681 CC lib/ut_mock/mock.o 00:02:17.681 CC lib/log/log.o 00:02:17.681 CC lib/log/log_flags.o 00:02:17.681 CC lib/log/log_deprecated.o 00:02:17.681 CC lib/ut/ut.o 00:02:17.681 LIB libspdk_ut_mock.a 00:02:17.681 SO libspdk_ut_mock.so.5.0 00:02:17.681 LIB libspdk_log.a 00:02:17.681 LIB libspdk_ut.a 00:02:17.681 SO libspdk_log.so.6.1 00:02:17.681 SYMLINK libspdk_ut_mock.so 00:02:17.681 SO libspdk_ut.so.1.0 00:02:17.681 SYMLINK libspdk_log.so 00:02:17.681 SYMLINK libspdk_ut.so 00:02:17.681 CXX lib/trace_parser/trace.o 00:02:17.681 CC lib/util/base64.o 00:02:17.681 CC lib/util/bit_array.o 00:02:17.681 CC lib/util/cpuset.o 00:02:17.681 CC lib/util/crc16.o 00:02:17.681 CC lib/util/crc32c.o 00:02:17.681 CC lib/util/crc32.o 00:02:17.681 CC lib/util/crc32_ieee.o 00:02:17.681 CC lib/util/crc64.o 00:02:17.681 CC lib/util/dif.o 00:02:17.681 CC lib/util/fd.o 00:02:17.681 CC lib/util/file.o 00:02:17.681 CC lib/util/hexlify.o 00:02:17.681 CC lib/util/iov.o 00:02:17.681 CC lib/util/math.o 00:02:17.681 CC lib/util/pipe.o 00:02:17.681 CC lib/util/strerror_tls.o 00:02:17.681 CC lib/util/string.o 00:02:17.681 CC lib/util/uuid.o 00:02:17.681 CC lib/util/fd_group.o 00:02:17.681 CC lib/util/xor.o 00:02:17.681 CC lib/util/zipf.o 00:02:17.681 CC lib/ioat/ioat.o 00:02:17.681 CC lib/dma/dma.o 00:02:17.681 CC lib/vfio_user/host/vfio_user_pci.o 00:02:17.681 CC lib/vfio_user/host/vfio_user.o 00:02:17.681 LIB libspdk_dma.a 00:02:17.681 SO libspdk_dma.so.3.0 00:02:17.681 LIB libspdk_ioat.a 00:02:17.681 SYMLINK libspdk_dma.so 00:02:17.681 SO libspdk_ioat.so.6.0 00:02:17.682 LIB libspdk_vfio_user.a 00:02:17.682 SYMLINK libspdk_ioat.so 00:02:17.682 SO libspdk_vfio_user.so.4.0 00:02:17.682 LIB libspdk_util.a 00:02:17.682 SYMLINK libspdk_vfio_user.so 00:02:17.682 SO libspdk_util.so.8.0 00:02:17.682 SYMLINK libspdk_util.so 00:02:17.682 LIB libspdk_trace_parser.a 00:02:17.682 SO libspdk_trace_parser.so.4.0 00:02:17.682 SYMLINK libspdk_trace_parser.so 00:02:17.682 CC lib/env_dpdk/env.o 00:02:17.682 CC lib/env_dpdk/init.o 00:02:17.682 CC lib/env_dpdk/memory.o 00:02:17.682 CC lib/env_dpdk/pci.o 00:02:17.682 CC lib/env_dpdk/pci_virtio.o 00:02:17.682 CC lib/env_dpdk/threads.o 00:02:17.682 CC lib/env_dpdk/pci_ioat.o 00:02:17.682 CC lib/env_dpdk/pci_vmd.o 00:02:17.682 CC lib/env_dpdk/pci_idxd.o 00:02:17.682 CC lib/env_dpdk/pci_event.o 00:02:17.682 CC lib/env_dpdk/pci_dpdk.o 00:02:17.682 CC lib/env_dpdk/sigbus_handler.o 00:02:17.682 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:17.682 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:17.682 CC lib/rdma/common.o 00:02:17.682 CC lib/conf/conf.o 00:02:17.682 CC lib/rdma/rdma_verbs.o 00:02:17.682 CC lib/idxd/idxd.o 00:02:17.682 CC lib/json/json_parse.o 00:02:17.682 CC lib/idxd/idxd_user.o 00:02:17.682 CC lib/json/json_util.o 00:02:17.682 CC lib/idxd/idxd_kernel.o 00:02:17.682 CC lib/json/json_write.o 00:02:17.682 CC lib/vmd/vmd.o 00:02:17.682 CC lib/vmd/led.o 00:02:17.940 LIB libspdk_conf.a 00:02:17.940 SO libspdk_conf.so.5.0 00:02:17.940 LIB libspdk_json.a 00:02:17.940 SYMLINK libspdk_conf.so 00:02:17.940 LIB libspdk_rdma.a 00:02:17.940 SO libspdk_json.so.5.1 00:02:18.199 SO libspdk_rdma.so.5.0 00:02:18.199 SYMLINK libspdk_json.so 00:02:18.199 SYMLINK 
libspdk_rdma.so 00:02:18.199 LIB libspdk_idxd.a 00:02:18.199 SO libspdk_idxd.so.11.0 00:02:18.199 LIB libspdk_vmd.a 00:02:18.199 CC lib/jsonrpc/jsonrpc_server.o 00:02:18.199 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:18.199 CC lib/jsonrpc/jsonrpc_client.o 00:02:18.199 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:18.199 SYMLINK libspdk_idxd.so 00:02:18.199 SO libspdk_vmd.so.5.0 00:02:18.457 SYMLINK libspdk_vmd.so 00:02:18.457 LIB libspdk_jsonrpc.a 00:02:18.457 SO libspdk_jsonrpc.so.5.1 00:02:18.715 SYMLINK libspdk_jsonrpc.so 00:02:18.715 LIB libspdk_env_dpdk.a 00:02:18.715 SO libspdk_env_dpdk.so.13.0 00:02:18.715 CC lib/rpc/rpc.o 00:02:18.975 SYMLINK libspdk_env_dpdk.so 00:02:18.975 LIB libspdk_rpc.a 00:02:18.975 SO libspdk_rpc.so.5.0 00:02:18.975 SYMLINK libspdk_rpc.so 00:02:19.233 CC lib/notify/notify.o 00:02:19.233 CC lib/notify/notify_rpc.o 00:02:19.233 CC lib/trace/trace.o 00:02:19.233 CC lib/trace/trace_flags.o 00:02:19.233 CC lib/sock/sock.o 00:02:19.233 CC lib/trace/trace_rpc.o 00:02:19.233 CC lib/sock/sock_rpc.o 00:02:19.233 LIB libspdk_notify.a 00:02:19.492 SO libspdk_notify.so.5.0 00:02:19.492 LIB libspdk_trace.a 00:02:19.492 SYMLINK libspdk_notify.so 00:02:19.492 SO libspdk_trace.so.9.0 00:02:19.492 LIB libspdk_sock.a 00:02:19.492 SYMLINK libspdk_trace.so 00:02:19.492 SO libspdk_sock.so.8.0 00:02:19.751 SYMLINK libspdk_sock.so 00:02:19.751 CC lib/thread/iobuf.o 00:02:19.751 CC lib/thread/thread.o 00:02:19.751 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:19.751 CC lib/nvme/nvme_fabric.o 00:02:19.751 CC lib/nvme/nvme_ctrlr.o 00:02:19.751 CC lib/nvme/nvme_ns.o 00:02:19.751 CC lib/nvme/nvme_pcie_common.o 00:02:19.751 CC lib/nvme/nvme_ns_cmd.o 00:02:19.751 CC lib/nvme/nvme_qpair.o 00:02:19.751 CC lib/nvme/nvme.o 00:02:19.751 CC lib/nvme/nvme_pcie.o 00:02:19.751 CC lib/nvme/nvme_quirks.o 00:02:19.751 CC lib/nvme/nvme_transport.o 00:02:19.751 CC lib/nvme/nvme_discovery.o 00:02:19.751 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:19.751 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:19.751 CC lib/nvme/nvme_tcp.o 00:02:19.751 CC lib/nvme/nvme_opal.o 00:02:19.751 CC lib/nvme/nvme_io_msg.o 00:02:19.751 CC lib/nvme/nvme_poll_group.o 00:02:19.751 CC lib/nvme/nvme_zns.o 00:02:19.751 CC lib/nvme/nvme_cuse.o 00:02:19.751 CC lib/nvme/nvme_vfio_user.o 00:02:19.751 CC lib/nvme/nvme_rdma.o 00:02:20.688 LIB libspdk_thread.a 00:02:20.948 SO libspdk_thread.so.9.0 00:02:20.948 SYMLINK libspdk_thread.so 00:02:21.207 CC lib/init/json_config.o 00:02:21.207 CC lib/accel/accel.o 00:02:21.207 CC lib/accel/accel_rpc.o 00:02:21.207 CC lib/init/subsystem.o 00:02:21.207 CC lib/accel/accel_sw.o 00:02:21.207 CC lib/init/subsystem_rpc.o 00:02:21.207 CC lib/init/rpc.o 00:02:21.207 CC lib/vfu_tgt/tgt_endpoint.o 00:02:21.207 CC lib/vfu_tgt/tgt_rpc.o 00:02:21.207 CC lib/virtio/virtio.o 00:02:21.207 CC lib/blob/blobstore.o 00:02:21.207 CC lib/virtio/virtio_vfio_user.o 00:02:21.207 CC lib/blob/request.o 00:02:21.207 CC lib/blob/blob_bs_dev.o 00:02:21.207 CC lib/blob/zeroes.o 00:02:21.207 CC lib/virtio/virtio_vhost_user.o 00:02:21.207 CC lib/virtio/virtio_pci.o 00:02:21.207 LIB libspdk_init.a 00:02:21.466 LIB libspdk_nvme.a 00:02:21.466 SO libspdk_init.so.4.0 00:02:21.466 LIB libspdk_vfu_tgt.a 00:02:21.466 LIB libspdk_virtio.a 00:02:21.466 SO libspdk_vfu_tgt.so.2.0 00:02:21.466 SO libspdk_virtio.so.6.0 00:02:21.466 SYMLINK libspdk_init.so 00:02:21.466 SO libspdk_nvme.so.12.0 00:02:21.466 SYMLINK libspdk_vfu_tgt.so 00:02:21.466 SYMLINK libspdk_virtio.so 00:02:21.725 CC lib/event/app.o 00:02:21.725 CC lib/event/reactor.o 00:02:21.725 CC 
lib/event/log_rpc.o 00:02:21.725 CC lib/event/app_rpc.o 00:02:21.725 CC lib/event/scheduler_static.o 00:02:21.725 SYMLINK libspdk_nvme.so 00:02:21.725 LIB libspdk_accel.a 00:02:21.985 SO libspdk_accel.so.14.0 00:02:21.985 SYMLINK libspdk_accel.so 00:02:21.985 LIB libspdk_event.a 00:02:21.985 SO libspdk_event.so.12.0 00:02:21.985 SYMLINK libspdk_event.so 00:02:22.245 CC lib/bdev/bdev_rpc.o 00:02:22.245 CC lib/bdev/bdev.o 00:02:22.245 CC lib/bdev/bdev_zone.o 00:02:22.245 CC lib/bdev/part.o 00:02:22.245 CC lib/bdev/scsi_nvme.o 00:02:23.182 LIB libspdk_blob.a 00:02:23.182 SO libspdk_blob.so.10.1 00:02:23.182 SYMLINK libspdk_blob.so 00:02:23.182 CC lib/lvol/lvol.o 00:02:23.441 CC lib/blobfs/blobfs.o 00:02:23.441 CC lib/blobfs/tree.o 00:02:24.009 LIB libspdk_lvol.a 00:02:24.009 LIB libspdk_bdev.a 00:02:24.009 LIB libspdk_blobfs.a 00:02:24.009 SO libspdk_lvol.so.9.1 00:02:24.009 SO libspdk_bdev.so.14.0 00:02:24.009 SO libspdk_blobfs.so.9.0 00:02:24.009 SYMLINK libspdk_lvol.so 00:02:24.009 SYMLINK libspdk_bdev.so 00:02:24.009 SYMLINK libspdk_blobfs.so 00:02:24.267 CC lib/scsi/lun.o 00:02:24.267 CC lib/scsi/dev.o 00:02:24.267 CC lib/ftl/ftl_core.o 00:02:24.267 CC lib/ftl/ftl_debug.o 00:02:24.267 CC lib/ftl/ftl_init.o 00:02:24.267 CC lib/scsi/scsi.o 00:02:24.267 CC lib/ftl/ftl_layout.o 00:02:24.267 CC lib/scsi/scsi_pr.o 00:02:24.267 CC lib/scsi/scsi_bdev.o 00:02:24.267 CC lib/scsi/port.o 00:02:24.267 CC lib/ftl/ftl_io.o 00:02:24.267 CC lib/ftl/ftl_sb.o 00:02:24.267 CC lib/ftl/ftl_l2p.o 00:02:24.267 CC lib/scsi/scsi_rpc.o 00:02:24.267 CC lib/ftl/ftl_l2p_flat.o 00:02:24.267 CC lib/scsi/task.o 00:02:24.267 CC lib/ftl/ftl_nv_cache.o 00:02:24.267 CC lib/ftl/ftl_band.o 00:02:24.267 CC lib/ftl/ftl_writer.o 00:02:24.267 CC lib/ftl/ftl_band_ops.o 00:02:24.267 CC lib/ftl/ftl_l2p_cache.o 00:02:24.267 CC lib/ftl/ftl_rq.o 00:02:24.267 CC lib/ftl/ftl_reloc.o 00:02:24.267 CC lib/ftl/ftl_p2l.o 00:02:24.267 CC lib/ftl/mngt/ftl_mngt.o 00:02:24.267 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:24.267 CC lib/nvmf/ctrlr.o 00:02:24.267 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:24.267 CC lib/nbd/nbd.o 00:02:24.267 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:24.267 CC lib/nbd/nbd_rpc.o 00:02:24.267 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:24.267 CC lib/ublk/ublk.o 00:02:24.267 CC lib/nvmf/ctrlr_discovery.o 00:02:24.267 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:24.267 CC lib/ublk/ublk_rpc.o 00:02:24.267 CC lib/nvmf/ctrlr_bdev.o 00:02:24.267 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:24.267 CC lib/nvmf/nvmf_rpc.o 00:02:24.267 CC lib/nvmf/subsystem.o 00:02:24.267 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:24.267 CC lib/nvmf/nvmf.o 00:02:24.267 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:24.267 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:24.267 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:24.267 CC lib/nvmf/transport.o 00:02:24.267 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:24.267 CC lib/nvmf/tcp.o 00:02:24.267 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:24.267 CC lib/ftl/utils/ftl_conf.o 00:02:24.267 CC lib/nvmf/vfio_user.o 00:02:24.267 CC lib/ftl/utils/ftl_md.o 00:02:24.267 CC lib/ftl/utils/ftl_mempool.o 00:02:24.267 CC lib/nvmf/rdma.o 00:02:24.267 CC lib/ftl/utils/ftl_property.o 00:02:24.267 CC lib/ftl/utils/ftl_bitmap.o 00:02:24.267 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:24.267 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:24.267 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:24.267 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:24.267 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:24.267 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:24.267 CC lib/ftl/upgrade/ftl_sb_v3.o 
00:02:24.267 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:24.267 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:24.267 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:24.267 CC lib/ftl/base/ftl_base_bdev.o 00:02:24.267 CC lib/ftl/base/ftl_base_dev.o 00:02:24.267 CC lib/ftl/ftl_trace.o 00:02:24.835 LIB libspdk_nbd.a 00:02:24.835 SO libspdk_nbd.so.6.0 00:02:24.835 LIB libspdk_scsi.a 00:02:24.835 SYMLINK libspdk_nbd.so 00:02:24.835 SO libspdk_scsi.so.8.0 00:02:24.835 LIB libspdk_ublk.a 00:02:24.835 SO libspdk_ublk.so.2.0 00:02:24.835 SYMLINK libspdk_scsi.so 00:02:24.835 SYMLINK libspdk_ublk.so 00:02:25.094 CC lib/iscsi/conn.o 00:02:25.094 CC lib/iscsi/iscsi.o 00:02:25.094 CC lib/vhost/vhost.o 00:02:25.094 CC lib/iscsi/init_grp.o 00:02:25.094 CC lib/vhost/vhost_rpc.o 00:02:25.094 CC lib/iscsi/param.o 00:02:25.094 CC lib/vhost/vhost_scsi.o 00:02:25.094 CC lib/iscsi/md5.o 00:02:25.094 LIB libspdk_ftl.a 00:02:25.094 CC lib/vhost/vhost_blk.o 00:02:25.094 CC lib/vhost/rte_vhost_user.o 00:02:25.094 CC lib/iscsi/portal_grp.o 00:02:25.094 CC lib/iscsi/tgt_node.o 00:02:25.094 CC lib/iscsi/iscsi_rpc.o 00:02:25.094 CC lib/iscsi/iscsi_subsystem.o 00:02:25.094 CC lib/iscsi/task.o 00:02:25.094 SO libspdk_ftl.so.8.0 00:02:25.354 SYMLINK libspdk_ftl.so 00:02:25.924 LIB libspdk_vhost.a 00:02:25.924 SO libspdk_vhost.so.7.1 00:02:25.924 LIB libspdk_nvmf.a 00:02:25.924 SO libspdk_nvmf.so.17.0 00:02:25.924 SYMLINK libspdk_vhost.so 00:02:25.924 LIB libspdk_iscsi.a 00:02:26.184 SO libspdk_iscsi.so.7.0 00:02:26.184 SYMLINK libspdk_nvmf.so 00:02:26.184 SYMLINK libspdk_iscsi.so 00:02:26.444 CC module/vfu_device/vfu_virtio.o 00:02:26.444 CC module/vfu_device/vfu_virtio_scsi.o 00:02:26.444 CC module/vfu_device/vfu_virtio_rpc.o 00:02:26.444 CC module/vfu_device/vfu_virtio_blk.o 00:02:26.444 CC module/env_dpdk/env_dpdk_rpc.o 00:02:26.704 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:26.704 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:26.704 CC module/accel/error/accel_error_rpc.o 00:02:26.704 CC module/accel/error/accel_error.o 00:02:26.704 CC module/scheduler/gscheduler/gscheduler.o 00:02:26.704 CC module/accel/iaa/accel_iaa.o 00:02:26.704 CC module/accel/iaa/accel_iaa_rpc.o 00:02:26.704 CC module/accel/ioat/accel_ioat.o 00:02:26.704 CC module/accel/ioat/accel_ioat_rpc.o 00:02:26.704 CC module/accel/dsa/accel_dsa.o 00:02:26.704 CC module/accel/dsa/accel_dsa_rpc.o 00:02:26.704 CC module/sock/posix/posix.o 00:02:26.704 CC module/blob/bdev/blob_bdev.o 00:02:26.704 LIB libspdk_env_dpdk_rpc.a 00:02:26.704 SO libspdk_env_dpdk_rpc.so.5.0 00:02:26.704 SYMLINK libspdk_env_dpdk_rpc.so 00:02:26.704 LIB libspdk_scheduler_dpdk_governor.a 00:02:26.704 LIB libspdk_scheduler_gscheduler.a 00:02:26.704 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:26.704 LIB libspdk_accel_ioat.a 00:02:26.704 LIB libspdk_accel_error.a 00:02:26.704 LIB libspdk_scheduler_dynamic.a 00:02:26.704 SO libspdk_scheduler_gscheduler.so.3.0 00:02:26.704 LIB libspdk_accel_iaa.a 00:02:26.704 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:26.704 SO libspdk_accel_ioat.so.5.0 00:02:26.704 SO libspdk_scheduler_dynamic.so.3.0 00:02:26.704 SO libspdk_accel_error.so.1.0 00:02:26.704 LIB libspdk_accel_dsa.a 00:02:26.704 SO libspdk_accel_iaa.so.2.0 00:02:26.704 LIB libspdk_blob_bdev.a 00:02:26.704 SYMLINK libspdk_scheduler_gscheduler.so 00:02:26.963 SO libspdk_accel_dsa.so.4.0 00:02:26.963 SO libspdk_blob_bdev.so.10.1 00:02:26.963 SYMLINK libspdk_scheduler_dynamic.so 00:02:26.963 SYMLINK libspdk_accel_ioat.so 00:02:26.963 SYMLINK libspdk_accel_error.so 00:02:26.963 SYMLINK 
libspdk_accel_iaa.so 00:02:26.963 SYMLINK libspdk_accel_dsa.so 00:02:26.963 SYMLINK libspdk_blob_bdev.so 00:02:26.963 LIB libspdk_vfu_device.a 00:02:26.963 SO libspdk_vfu_device.so.2.0 00:02:26.963 SYMLINK libspdk_vfu_device.so 00:02:27.223 LIB libspdk_sock_posix.a 00:02:27.223 CC module/bdev/error/vbdev_error.o 00:02:27.223 CC module/bdev/error/vbdev_error_rpc.o 00:02:27.223 CC module/bdev/nvme/bdev_nvme.o 00:02:27.223 SO libspdk_sock_posix.so.5.0 00:02:27.223 CC module/bdev/nvme/nvme_rpc.o 00:02:27.223 CC module/bdev/nvme/bdev_mdns_client.o 00:02:27.223 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:27.223 CC module/bdev/nvme/vbdev_opal.o 00:02:27.223 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:27.223 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:27.223 CC module/bdev/ftl/bdev_ftl.o 00:02:27.223 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:27.223 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:27.223 CC module/bdev/malloc/bdev_malloc.o 00:02:27.223 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:27.223 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:27.223 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:27.223 CC module/bdev/delay/vbdev_delay.o 00:02:27.223 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:27.223 CC module/bdev/iscsi/bdev_iscsi.o 00:02:27.223 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:27.223 CC module/bdev/raid/bdev_raid_rpc.o 00:02:27.223 CC module/bdev/gpt/gpt.o 00:02:27.223 CC module/bdev/raid/bdev_raid.o 00:02:27.223 CC module/bdev/gpt/vbdev_gpt.o 00:02:27.223 CC module/bdev/raid/bdev_raid_sb.o 00:02:27.223 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:27.223 CC module/bdev/passthru/vbdev_passthru.o 00:02:27.223 CC module/bdev/raid/raid0.o 00:02:27.223 CC module/bdev/raid/raid1.o 00:02:27.223 CC module/bdev/raid/concat.o 00:02:27.223 CC module/bdev/aio/bdev_aio.o 00:02:27.223 CC module/bdev/lvol/vbdev_lvol.o 00:02:27.223 CC module/bdev/aio/bdev_aio_rpc.o 00:02:27.223 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:27.223 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:27.223 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:27.223 CC module/bdev/split/vbdev_split_rpc.o 00:02:27.223 CC module/bdev/split/vbdev_split.o 00:02:27.223 CC module/bdev/null/bdev_null.o 00:02:27.223 CC module/bdev/null/bdev_null_rpc.o 00:02:27.223 CC module/blobfs/bdev/blobfs_bdev.o 00:02:27.223 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:27.223 SYMLINK libspdk_sock_posix.so 00:02:27.482 LIB libspdk_blobfs_bdev.a 00:02:27.482 LIB libspdk_bdev_error.a 00:02:27.482 SO libspdk_blobfs_bdev.so.5.0 00:02:27.482 LIB libspdk_bdev_split.a 00:02:27.482 SO libspdk_bdev_error.so.5.0 00:02:27.482 LIB libspdk_bdev_null.a 00:02:27.482 SO libspdk_bdev_split.so.5.0 00:02:27.482 LIB libspdk_bdev_gpt.a 00:02:27.482 SYMLINK libspdk_blobfs_bdev.so 00:02:27.482 LIB libspdk_bdev_ftl.a 00:02:27.482 SO libspdk_bdev_null.so.5.0 00:02:27.482 LIB libspdk_bdev_passthru.a 00:02:27.482 SO libspdk_bdev_gpt.so.5.0 00:02:27.482 SYMLINK libspdk_bdev_error.so 00:02:27.482 SO libspdk_bdev_ftl.so.5.0 00:02:27.482 LIB libspdk_bdev_malloc.a 00:02:27.482 LIB libspdk_bdev_zone_block.a 00:02:27.482 LIB libspdk_bdev_iscsi.a 00:02:27.482 SO libspdk_bdev_passthru.so.5.0 00:02:27.482 LIB libspdk_bdev_aio.a 00:02:27.482 SYMLINK libspdk_bdev_split.so 00:02:27.482 SO libspdk_bdev_malloc.so.5.0 00:02:27.482 SYMLINK libspdk_bdev_null.so 00:02:27.482 SO libspdk_bdev_zone_block.so.5.0 00:02:27.482 LIB libspdk_bdev_delay.a 00:02:27.482 SO libspdk_bdev_iscsi.so.5.0 00:02:27.482 SYMLINK libspdk_bdev_gpt.so 00:02:27.482 SO libspdk_bdev_aio.so.5.0 00:02:27.742 
SYMLINK libspdk_bdev_ftl.so 00:02:27.742 SYMLINK libspdk_bdev_passthru.so 00:02:27.742 SO libspdk_bdev_delay.so.5.0 00:02:27.742 SYMLINK libspdk_bdev_malloc.so 00:02:27.742 LIB libspdk_bdev_lvol.a 00:02:27.742 SYMLINK libspdk_bdev_zone_block.so 00:02:27.742 SYMLINK libspdk_bdev_iscsi.so 00:02:27.742 SYMLINK libspdk_bdev_aio.so 00:02:27.742 LIB libspdk_bdev_virtio.a 00:02:27.742 SO libspdk_bdev_lvol.so.5.0 00:02:27.742 SYMLINK libspdk_bdev_delay.so 00:02:27.742 SO libspdk_bdev_virtio.so.5.0 00:02:27.742 SYMLINK libspdk_bdev_virtio.so 00:02:27.742 SYMLINK libspdk_bdev_lvol.so 00:02:28.002 LIB libspdk_bdev_raid.a 00:02:28.002 SO libspdk_bdev_raid.so.5.0 00:02:28.002 SYMLINK libspdk_bdev_raid.so 00:02:28.940 LIB libspdk_bdev_nvme.a 00:02:28.940 SO libspdk_bdev_nvme.so.6.0 00:02:28.940 SYMLINK libspdk_bdev_nvme.so 00:02:29.199 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:29.199 CC module/event/subsystems/vmd/vmd.o 00:02:29.199 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:29.199 CC module/event/subsystems/scheduler/scheduler.o 00:02:29.199 CC module/event/subsystems/iobuf/iobuf.o 00:02:29.199 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:29.199 CC module/event/subsystems/sock/sock.o 00:02:29.199 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:29.459 LIB libspdk_event_scheduler.a 00:02:29.459 LIB libspdk_event_vmd.a 00:02:29.459 LIB libspdk_event_vfu_tgt.a 00:02:29.459 LIB libspdk_event_vhost_blk.a 00:02:29.459 LIB libspdk_event_sock.a 00:02:29.459 SO libspdk_event_scheduler.so.3.0 00:02:29.459 SO libspdk_event_vmd.so.5.0 00:02:29.459 SO libspdk_event_vfu_tgt.so.2.0 00:02:29.459 LIB libspdk_event_iobuf.a 00:02:29.459 SO libspdk_event_sock.so.4.0 00:02:29.459 SO libspdk_event_vhost_blk.so.2.0 00:02:29.459 SYMLINK libspdk_event_scheduler.so 00:02:29.459 SO libspdk_event_iobuf.so.2.0 00:02:29.459 SYMLINK libspdk_event_vmd.so 00:02:29.459 SYMLINK libspdk_event_vfu_tgt.so 00:02:29.459 SYMLINK libspdk_event_sock.so 00:02:29.459 SYMLINK libspdk_event_vhost_blk.so 00:02:29.459 SYMLINK libspdk_event_iobuf.so 00:02:29.719 CC module/event/subsystems/accel/accel.o 00:02:29.719 LIB libspdk_event_accel.a 00:02:29.719 SO libspdk_event_accel.so.5.0 00:02:29.977 SYMLINK libspdk_event_accel.so 00:02:29.977 CC module/event/subsystems/bdev/bdev.o 00:02:30.236 LIB libspdk_event_bdev.a 00:02:30.236 SO libspdk_event_bdev.so.5.0 00:02:30.236 SYMLINK libspdk_event_bdev.so 00:02:30.495 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:30.495 CC module/event/subsystems/ublk/ublk.o 00:02:30.495 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:30.495 CC module/event/subsystems/scsi/scsi.o 00:02:30.495 CC module/event/subsystems/nbd/nbd.o 00:02:30.495 LIB libspdk_event_ublk.a 00:02:30.495 SO libspdk_event_ublk.so.2.0 00:02:30.755 LIB libspdk_event_scsi.a 00:02:30.755 LIB libspdk_event_nbd.a 00:02:30.755 SO libspdk_event_nbd.so.5.0 00:02:30.755 SYMLINK libspdk_event_ublk.so 00:02:30.755 LIB libspdk_event_nvmf.a 00:02:30.755 SO libspdk_event_scsi.so.5.0 00:02:30.755 SO libspdk_event_nvmf.so.5.0 00:02:30.755 SYMLINK libspdk_event_nbd.so 00:02:30.755 SYMLINK libspdk_event_scsi.so 00:02:30.755 SYMLINK libspdk_event_nvmf.so 00:02:31.015 CC module/event/subsystems/iscsi/iscsi.o 00:02:31.015 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:31.015 LIB libspdk_event_vhost_scsi.a 00:02:31.015 LIB libspdk_event_iscsi.a 00:02:31.015 SO libspdk_event_vhost_scsi.so.2.0 00:02:31.015 SO libspdk_event_iscsi.so.5.0 00:02:31.275 SYMLINK libspdk_event_vhost_scsi.so 00:02:31.275 SYMLINK libspdk_event_iscsi.so 
00:02:31.275 SO libspdk.so.5.0 00:02:31.275 SYMLINK libspdk.so 00:02:31.544 CC app/trace_record/trace_record.o 00:02:31.544 CXX app/trace/trace.o 00:02:31.544 CC app/spdk_nvme_perf/perf.o 00:02:31.544 CC app/spdk_lspci/spdk_lspci.o 00:02:31.544 CC app/spdk_top/spdk_top.o 00:02:31.544 TEST_HEADER include/spdk/accel.h 00:02:31.544 TEST_HEADER include/spdk/accel_module.h 00:02:31.544 TEST_HEADER include/spdk/assert.h 00:02:31.544 TEST_HEADER include/spdk/barrier.h 00:02:31.544 TEST_HEADER include/spdk/bdev.h 00:02:31.544 TEST_HEADER include/spdk/bdev_module.h 00:02:31.544 TEST_HEADER include/spdk/bdev_zone.h 00:02:31.544 CC app/spdk_nvme_identify/identify.o 00:02:31.544 TEST_HEADER include/spdk/base64.h 00:02:31.544 TEST_HEADER include/spdk/bit_pool.h 00:02:31.544 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:31.544 TEST_HEADER include/spdk/bit_array.h 00:02:31.544 TEST_HEADER include/spdk/blob_bdev.h 00:02:31.544 TEST_HEADER include/spdk/blobfs.h 00:02:31.544 TEST_HEADER include/spdk/conf.h 00:02:31.544 TEST_HEADER include/spdk/blob.h 00:02:31.544 TEST_HEADER include/spdk/config.h 00:02:31.544 TEST_HEADER include/spdk/crc16.h 00:02:31.544 CC app/spdk_nvme_discover/discovery_aer.o 00:02:31.544 TEST_HEADER include/spdk/crc32.h 00:02:31.544 TEST_HEADER include/spdk/cpuset.h 00:02:31.544 TEST_HEADER include/spdk/dif.h 00:02:31.544 TEST_HEADER include/spdk/crc64.h 00:02:31.544 TEST_HEADER include/spdk/dma.h 00:02:31.544 TEST_HEADER include/spdk/endian.h 00:02:31.544 CC test/rpc_client/rpc_client_test.o 00:02:31.544 TEST_HEADER include/spdk/env_dpdk.h 00:02:31.544 TEST_HEADER include/spdk/fd_group.h 00:02:31.544 TEST_HEADER include/spdk/env.h 00:02:31.544 TEST_HEADER include/spdk/file.h 00:02:31.544 TEST_HEADER include/spdk/event.h 00:02:31.544 TEST_HEADER include/spdk/fd.h 00:02:31.544 TEST_HEADER include/spdk/gpt_spec.h 00:02:31.544 TEST_HEADER include/spdk/hexlify.h 00:02:31.544 TEST_HEADER include/spdk/histogram_data.h 00:02:31.544 TEST_HEADER include/spdk/ftl.h 00:02:31.544 CC app/nvmf_tgt/nvmf_main.o 00:02:31.544 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:31.544 TEST_HEADER include/spdk/idxd.h 00:02:31.544 TEST_HEADER include/spdk/init.h 00:02:31.544 TEST_HEADER include/spdk/ioat.h 00:02:31.544 TEST_HEADER include/spdk/ioat_spec.h 00:02:31.544 TEST_HEADER include/spdk/idxd_spec.h 00:02:31.544 TEST_HEADER include/spdk/iscsi_spec.h 00:02:31.544 CC app/spdk_dd/spdk_dd.o 00:02:31.544 TEST_HEADER include/spdk/likely.h 00:02:31.544 TEST_HEADER include/spdk/jsonrpc.h 00:02:31.544 TEST_HEADER include/spdk/json.h 00:02:31.544 TEST_HEADER include/spdk/log.h 00:02:31.544 TEST_HEADER include/spdk/lvol.h 00:02:31.544 TEST_HEADER include/spdk/memory.h 00:02:31.544 TEST_HEADER include/spdk/mmio.h 00:02:31.544 TEST_HEADER include/spdk/nbd.h 00:02:31.544 TEST_HEADER include/spdk/notify.h 00:02:31.544 TEST_HEADER include/spdk/nvme.h 00:02:31.544 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:31.544 TEST_HEADER include/spdk/nvme_intel.h 00:02:31.544 TEST_HEADER include/spdk/nvme_zns.h 00:02:31.544 TEST_HEADER include/spdk/nvme_spec.h 00:02:31.544 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:31.544 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:31.544 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:31.544 TEST_HEADER include/spdk/nvmf.h 00:02:31.544 CC app/vhost/vhost.o 00:02:31.544 TEST_HEADER include/spdk/nvmf_spec.h 00:02:31.544 TEST_HEADER include/spdk/nvmf_transport.h 00:02:31.544 TEST_HEADER include/spdk/pci_ids.h 00:02:31.544 TEST_HEADER include/spdk/pipe.h 00:02:31.544 TEST_HEADER include/spdk/opal.h 
00:02:31.544 TEST_HEADER include/spdk/opal_spec.h 00:02:31.544 TEST_HEADER include/spdk/queue.h 00:02:31.544 TEST_HEADER include/spdk/rpc.h 00:02:31.544 TEST_HEADER include/spdk/reduce.h 00:02:31.544 TEST_HEADER include/spdk/scheduler.h 00:02:31.544 TEST_HEADER include/spdk/scsi.h 00:02:31.544 TEST_HEADER include/spdk/sock.h 00:02:31.544 TEST_HEADER include/spdk/scsi_spec.h 00:02:31.544 TEST_HEADER include/spdk/stdinc.h 00:02:31.545 TEST_HEADER include/spdk/trace.h 00:02:31.545 TEST_HEADER include/spdk/string.h 00:02:31.545 TEST_HEADER include/spdk/thread.h 00:02:31.545 TEST_HEADER include/spdk/trace_parser.h 00:02:31.545 CC app/spdk_tgt/spdk_tgt.o 00:02:31.545 TEST_HEADER include/spdk/ublk.h 00:02:31.545 TEST_HEADER include/spdk/tree.h 00:02:31.545 TEST_HEADER include/spdk/util.h 00:02:31.545 TEST_HEADER include/spdk/uuid.h 00:02:31.545 TEST_HEADER include/spdk/version.h 00:02:31.545 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:31.545 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:31.545 TEST_HEADER include/spdk/vmd.h 00:02:31.545 TEST_HEADER include/spdk/vhost.h 00:02:31.545 CC app/iscsi_tgt/iscsi_tgt.o 00:02:31.545 TEST_HEADER include/spdk/xor.h 00:02:31.545 TEST_HEADER include/spdk/zipf.h 00:02:31.545 CC examples/sock/hello_world/hello_sock.o 00:02:31.545 CXX test/cpp_headers/accel.o 00:02:31.545 CXX test/cpp_headers/accel_module.o 00:02:31.545 CXX test/cpp_headers/base64.o 00:02:31.545 CXX test/cpp_headers/assert.o 00:02:31.545 CXX test/cpp_headers/barrier.o 00:02:31.545 CXX test/cpp_headers/bdev_zone.o 00:02:31.545 CXX test/cpp_headers/bdev.o 00:02:31.545 CXX test/cpp_headers/bdev_module.o 00:02:31.545 CXX test/cpp_headers/bit_array.o 00:02:31.545 CXX test/cpp_headers/blobfs_bdev.o 00:02:31.545 CXX test/cpp_headers/bit_pool.o 00:02:31.545 CXX test/cpp_headers/blob_bdev.o 00:02:31.545 CXX test/cpp_headers/blobfs.o 00:02:31.545 CXX test/cpp_headers/blob.o 00:02:31.545 CXX test/cpp_headers/config.o 00:02:31.545 CXX test/cpp_headers/conf.o 00:02:31.545 CXX test/cpp_headers/cpuset.o 00:02:31.545 CXX test/cpp_headers/crc16.o 00:02:31.545 CXX test/cpp_headers/dif.o 00:02:31.545 CXX test/cpp_headers/crc32.o 00:02:31.545 CXX test/cpp_headers/crc64.o 00:02:31.545 CC examples/nvme/reconnect/reconnect.o 00:02:31.545 CC examples/ioat/verify/verify.o 00:02:31.545 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:31.545 CC test/app/histogram_perf/histogram_perf.o 00:02:31.545 CC examples/nvme/hotplug/hotplug.o 00:02:31.545 CC examples/ioat/perf/perf.o 00:02:31.545 CC examples/nvme/arbitration/arbitration.o 00:02:31.545 CC test/env/vtophys/vtophys.o 00:02:31.545 CC examples/nvme/hello_world/hello_world.o 00:02:31.808 CC examples/accel/perf/accel_perf.o 00:02:31.808 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:31.808 CC test/env/pci/pci_ut.o 00:02:31.808 CC test/env/memory/memory_ut.o 00:02:31.808 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:31.808 CC examples/nvme/abort/abort.o 00:02:31.808 CC examples/vmd/lsvmd/lsvmd.o 00:02:31.808 CC examples/vmd/led/led.o 00:02:31.808 CC app/fio/nvme/fio_plugin.o 00:02:31.808 CC test/event/event_perf/event_perf.o 00:02:31.808 CC test/app/stub/stub.o 00:02:31.808 CC test/nvme/e2edp/nvme_dp.o 00:02:31.808 CC examples/blob/hello_world/hello_blob.o 00:02:31.808 CC examples/idxd/perf/perf.o 00:02:31.808 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:31.808 CC test/nvme/fdp/fdp.o 00:02:31.808 CC examples/util/zipf/zipf.o 00:02:31.808 CC test/nvme/reserve/reserve.o 00:02:31.808 CC test/event/reactor/reactor.o 00:02:31.808 CC 
test/nvme/err_injection/err_injection.o 00:02:31.808 CC test/nvme/simple_copy/simple_copy.o 00:02:31.808 CC test/event/reactor_perf/reactor_perf.o 00:02:31.808 CC examples/thread/thread/thread_ex.o 00:02:31.808 CC test/nvme/compliance/nvme_compliance.o 00:02:31.808 CC test/app/jsoncat/jsoncat.o 00:02:31.808 CC examples/blob/cli/blobcli.o 00:02:31.808 CC test/nvme/cuse/cuse.o 00:02:31.808 CC app/fio/bdev/fio_plugin.o 00:02:31.808 CC test/nvme/reset/reset.o 00:02:31.808 CC test/nvme/sgl/sgl.o 00:02:31.808 CC test/bdev/bdevio/bdevio.o 00:02:31.808 CC test/nvme/boot_partition/boot_partition.o 00:02:31.808 CC test/dma/test_dma/test_dma.o 00:02:31.808 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:31.808 CC test/nvme/aer/aer.o 00:02:31.808 CC test/nvme/overhead/overhead.o 00:02:31.808 CC test/thread/poller_perf/poller_perf.o 00:02:31.808 CC test/nvme/startup/startup.o 00:02:31.808 CC test/app/bdev_svc/bdev_svc.o 00:02:31.808 CC examples/bdev/bdevperf/bdevperf.o 00:02:31.808 CC test/nvme/fused_ordering/fused_ordering.o 00:02:31.808 CC examples/nvmf/nvmf/nvmf.o 00:02:31.808 CC test/nvme/connect_stress/connect_stress.o 00:02:31.808 CC examples/bdev/hello_world/hello_bdev.o 00:02:31.808 CC test/event/scheduler/scheduler.o 00:02:31.808 CC test/event/app_repeat/app_repeat.o 00:02:31.808 CC test/accel/dif/dif.o 00:02:31.808 CC test/blobfs/mkfs/mkfs.o 00:02:31.808 LINK spdk_lspci 00:02:31.808 CC test/env/mem_callbacks/mem_callbacks.o 00:02:31.808 CC test/lvol/esnap/esnap.o 00:02:31.808 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:32.090 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:32.090 LINK nvmf_tgt 00:02:32.090 LINK spdk_nvme_discover 00:02:32.090 LINK vhost 00:02:32.090 LINK spdk_trace_record 00:02:32.090 LINK led 00:02:32.090 LINK iscsi_tgt 00:02:32.090 LINK cmb_copy 00:02:32.090 LINK jsoncat 00:02:32.090 CXX test/cpp_headers/dma.o 00:02:32.090 LINK poller_perf 00:02:32.090 CXX test/cpp_headers/endian.o 00:02:32.090 CXX test/cpp_headers/env_dpdk.o 00:02:32.090 CXX test/cpp_headers/env.o 00:02:32.090 CXX test/cpp_headers/event.o 00:02:32.090 CXX test/cpp_headers/fd_group.o 00:02:32.090 CXX test/cpp_headers/fd.o 00:02:32.090 CXX test/cpp_headers/file.o 00:02:32.090 CXX test/cpp_headers/gpt_spec.o 00:02:32.090 CXX test/cpp_headers/ftl.o 00:02:32.090 LINK interrupt_tgt 00:02:32.090 LINK spdk_tgt 00:02:32.090 LINK rpc_client_test 00:02:32.090 LINK hello_world 00:02:32.090 CXX test/cpp_headers/hexlify.o 00:02:32.090 LINK bdev_svc 00:02:32.090 LINK vtophys 00:02:32.090 LINK lsvmd 00:02:32.090 LINK hotplug 00:02:32.090 CXX test/cpp_headers/histogram_data.o 00:02:32.090 CXX test/cpp_headers/idxd.o 00:02:32.090 LINK hello_blob 00:02:32.090 LINK histogram_perf 00:02:32.392 LINK event_perf 00:02:32.392 LINK env_dpdk_post_init 00:02:32.392 LINK reactor 00:02:32.392 LINK reactor_perf 00:02:32.392 LINK zipf 00:02:32.392 LINK thread 00:02:32.392 LINK nvme_dp 00:02:32.392 LINK mem_callbacks 00:02:32.392 LINK stub 00:02:32.392 LINK spdk_dd 00:02:32.392 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:32.392 LINK app_repeat 00:02:32.392 LINK hello_bdev 00:02:32.392 LINK startup 00:02:32.392 LINK boot_partition 00:02:32.392 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:32.392 LINK pmr_persistence 00:02:32.392 LINK hello_sock 00:02:32.392 LINK err_injection 00:02:32.392 LINK verify 00:02:32.392 LINK doorbell_aers 00:02:32.392 LINK ioat_perf 00:02:32.392 LINK reserve 00:02:32.392 CXX test/cpp_headers/idxd_spec.o 00:02:32.392 LINK connect_stress 00:02:32.392 LINK reconnect 00:02:32.392 CXX test/cpp_headers/init.o 
00:02:32.392 LINK spdk_trace 00:02:32.392 LINK fdp 00:02:32.392 LINK aer 00:02:32.392 LINK nvmf 00:02:32.392 CXX test/cpp_headers/ioat.o 00:02:32.392 LINK fused_ordering 00:02:32.392 CXX test/cpp_headers/ioat_spec.o 00:02:32.392 CXX test/cpp_headers/iscsi_spec.o 00:02:32.392 CXX test/cpp_headers/json.o 00:02:32.392 CXX test/cpp_headers/jsonrpc.o 00:02:32.392 CXX test/cpp_headers/likely.o 00:02:32.392 LINK simple_copy 00:02:32.392 LINK mkfs 00:02:32.392 CXX test/cpp_headers/log.o 00:02:32.392 CXX test/cpp_headers/lvol.o 00:02:32.392 LINK scheduler 00:02:32.392 CXX test/cpp_headers/memory.o 00:02:32.392 LINK reset 00:02:32.392 CXX test/cpp_headers/mmio.o 00:02:32.392 CXX test/cpp_headers/nbd.o 00:02:32.392 CXX test/cpp_headers/notify.o 00:02:32.392 LINK sgl 00:02:32.392 LINK accel_perf 00:02:32.392 LINK overhead 00:02:32.392 CXX test/cpp_headers/nvme.o 00:02:32.392 LINK arbitration 00:02:32.392 CXX test/cpp_headers/nvme_intel.o 00:02:32.392 CXX test/cpp_headers/nvme_ocssd.o 00:02:32.392 LINK nvme_manage 00:02:32.654 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:32.654 CXX test/cpp_headers/nvme_spec.o 00:02:32.654 CXX test/cpp_headers/nvme_zns.o 00:02:32.654 CXX test/cpp_headers/nvmf_cmd.o 00:02:32.654 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:32.654 CXX test/cpp_headers/nvmf.o 00:02:32.654 CXX test/cpp_headers/nvmf_spec.o 00:02:32.654 LINK idxd_perf 00:02:32.654 CXX test/cpp_headers/nvmf_transport.o 00:02:32.654 CXX test/cpp_headers/opal_spec.o 00:02:32.654 CXX test/cpp_headers/opal.o 00:02:32.654 CXX test/cpp_headers/pipe.o 00:02:32.654 CXX test/cpp_headers/pci_ids.o 00:02:32.654 LINK nvme_compliance 00:02:32.654 LINK blobcli 00:02:32.654 CXX test/cpp_headers/queue.o 00:02:32.654 CXX test/cpp_headers/rpc.o 00:02:32.654 CXX test/cpp_headers/reduce.o 00:02:32.654 CXX test/cpp_headers/scheduler.o 00:02:32.654 CXX test/cpp_headers/scsi.o 00:02:32.654 CXX test/cpp_headers/sock.o 00:02:32.654 CXX test/cpp_headers/scsi_spec.o 00:02:32.654 CXX test/cpp_headers/stdinc.o 00:02:32.654 LINK pci_ut 00:02:32.654 LINK abort 00:02:32.654 CXX test/cpp_headers/string.o 00:02:32.654 CXX test/cpp_headers/thread.o 00:02:32.654 CXX test/cpp_headers/trace.o 00:02:32.654 CXX test/cpp_headers/trace_parser.o 00:02:32.654 CXX test/cpp_headers/tree.o 00:02:32.654 CXX test/cpp_headers/ublk.o 00:02:32.654 CXX test/cpp_headers/util.o 00:02:32.654 CXX test/cpp_headers/uuid.o 00:02:32.654 LINK nvme_fuzz 00:02:32.654 LINK bdevio 00:02:32.654 LINK spdk_nvme 00:02:32.654 CXX test/cpp_headers/version.o 00:02:32.654 CXX test/cpp_headers/vfio_user_pci.o 00:02:32.654 CXX test/cpp_headers/vfio_user_spec.o 00:02:32.654 LINK memory_ut 00:02:32.654 CXX test/cpp_headers/vhost.o 00:02:32.654 CXX test/cpp_headers/vmd.o 00:02:32.654 LINK dif 00:02:32.654 LINK spdk_bdev 00:02:32.654 LINK test_dma 00:02:32.654 CXX test/cpp_headers/xor.o 00:02:32.654 CXX test/cpp_headers/zipf.o 00:02:32.913 LINK vhost_fuzz 00:02:32.913 LINK spdk_nvme_identify 00:02:32.913 LINK bdevperf 00:02:32.913 LINK spdk_nvme_perf 00:02:32.913 LINK spdk_top 00:02:33.172 LINK cuse 00:02:33.740 LINK iscsi_fuzz 00:02:35.643 LINK esnap 00:02:35.902 00:02:35.902 real 0m29.253s 00:02:35.902 user 5m1.454s 00:02:35.902 sys 2m22.536s 00:02:35.902 22:01:30 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:35.902 22:01:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.902 ************************************ 00:02:35.902 END TEST make 00:02:35.902 ************************************ 00:02:35.902 22:01:30 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:35.902 22:01:30 -- nvmf/common.sh@7 -- # uname -s 00:02:35.902 22:01:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:35.902 22:01:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:35.902 22:01:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:35.902 22:01:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:35.902 22:01:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:35.902 22:01:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:35.902 22:01:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:35.902 22:01:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:35.902 22:01:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:35.902 22:01:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:35.902 22:01:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:35.902 22:01:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:35.902 22:01:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:35.902 22:01:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:35.902 22:01:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:35.902 22:01:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:35.902 22:01:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:35.902 22:01:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.902 22:01:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.902 22:01:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.902 22:01:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.902 22:01:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.902 22:01:30 -- paths/export.sh@5 -- # export PATH 00:02:35.902 22:01:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.902 22:01:30 -- nvmf/common.sh@46 -- # : 0 00:02:35.902 22:01:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:35.902 22:01:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:35.902 22:01:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:35.902 22:01:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:35.902 22:01:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:35.902 22:01:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:35.902 22:01:30 -- nvmf/common.sh@34 
-- # '[' 0 -eq 1 ']' 00:02:35.902 22:01:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:35.902 22:01:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:35.902 22:01:30 -- spdk/autotest.sh@32 -- # uname -s 00:02:35.902 22:01:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:35.902 22:01:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:35.902 22:01:30 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:35.902 22:01:30 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:35.902 22:01:30 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:35.902 22:01:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:35.902 22:01:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:35.902 22:01:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:35.902 22:01:30 -- spdk/autotest.sh@48 -- # udevadm_pid=3323787 00:02:35.902 22:01:30 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:35.902 22:01:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:35.902 22:01:30 -- spdk/autotest.sh@54 -- # echo 3323789 00:02:35.903 22:01:30 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:35.903 22:01:30 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:35.903 22:01:30 -- spdk/autotest.sh@56 -- # echo 3323790 00:02:35.903 22:01:30 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:02:35.903 22:01:30 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:35.903 22:01:30 -- spdk/autotest.sh@60 -- # echo 3323791 00:02:35.903 22:01:30 -- spdk/autotest.sh@62 -- # echo 3323792 00:02:35.903 22:01:30 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:35.903 22:01:30 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:35.903 22:01:30 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:35.903 22:01:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:35.903 22:01:30 -- common/autotest_common.sh@10 -- # set +x 00:02:35.903 22:01:31 -- spdk/autotest.sh@70 -- # create_test_list 00:02:35.903 22:01:31 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:35.903 22:01:31 -- common/autotest_common.sh@10 -- # set +x 00:02:35.903 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:35.903 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:36.161 22:01:31 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:36.161 22:01:31 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.161 22:01:31 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.161 22:01:31 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:36.161 22:01:31 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.161 22:01:31 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:36.161 22:01:31 -- common/autotest_common.sh@1440 -- # uname 00:02:36.161 22:01:31 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:36.161 22:01:31 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:36.161 22:01:31 -- common/autotest_common.sh@1460 -- # uname 00:02:36.161 22:01:31 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:36.161 22:01:31 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:36.161 22:01:31 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:36.161 22:01:31 -- spdk/autotest.sh@83 -- # hash lcov 00:02:36.161 22:01:31 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:36.161 22:01:31 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:36.161 --rc lcov_branch_coverage=1 00:02:36.161 --rc lcov_function_coverage=1 00:02:36.161 --rc genhtml_branch_coverage=1 00:02:36.161 --rc genhtml_function_coverage=1 00:02:36.161 --rc genhtml_legend=1 00:02:36.161 --rc geninfo_all_blocks=1 00:02:36.161 ' 00:02:36.161 22:01:31 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:36.161 --rc lcov_branch_coverage=1 00:02:36.161 --rc lcov_function_coverage=1 00:02:36.161 --rc genhtml_branch_coverage=1 00:02:36.161 --rc genhtml_function_coverage=1 00:02:36.161 --rc genhtml_legend=1 00:02:36.161 --rc geninfo_all_blocks=1 00:02:36.161 ' 00:02:36.161 22:01:31 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:36.161 --rc lcov_branch_coverage=1 00:02:36.161 --rc lcov_function_coverage=1 00:02:36.161 --rc genhtml_branch_coverage=1 00:02:36.161 --rc genhtml_function_coverage=1 00:02:36.161 --rc genhtml_legend=1 00:02:36.161 
--rc geninfo_all_blocks=1 00:02:36.161 --no-external' 00:02:36.161 22:01:31 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:36.161 --rc lcov_branch_coverage=1 00:02:36.161 --rc lcov_function_coverage=1 00:02:36.161 --rc genhtml_branch_coverage=1 00:02:36.161 --rc genhtml_function_coverage=1 00:02:36.161 --rc genhtml_legend=1 00:02:36.161 --rc geninfo_all_blocks=1 00:02:36.161 --no-external' 00:02:36.161 22:01:31 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:36.161 lcov: LCOV version 1.14 00:02:36.161 22:01:31 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:38.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:38.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:38.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:38.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:38.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:38.695 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:56.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:56.775 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:56.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:56.775 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:56.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:56.775 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:56.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:56.775 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:56.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:56.775 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:56.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:56.775 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:56.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:56.775 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:56.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:56.775 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:56.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:56.776 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 
00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:56.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:56.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:56.777 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:56.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:56.777 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:59.310 22:01:54 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:59.310 22:01:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:59.310 22:01:54 -- common/autotest_common.sh@10 -- # set +x 00:02:59.310 22:01:54 -- spdk/autotest.sh@102 -- # rm -f 00:02:59.310 22:01:54 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:01.842 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:01.842 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:01.842 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:02.101 0000:00:04.5 (8086 2021): Already using the 
ioatdma driver 00:03:02.101 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:02.101 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:02.101 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:02.101 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:02.101 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:02.101 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:02.101 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:02.101 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:02.101 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:02.101 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:02.101 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:02.359 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:02.359 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:02.359 22:01:57 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:02.359 22:01:57 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:02.359 22:01:57 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:02.359 22:01:57 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:02.359 22:01:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:02.359 22:01:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:02.359 22:01:57 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:02.359 22:01:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:02.359 22:01:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:02.359 22:01:57 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:02.359 22:01:57 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:02.359 22:01:57 -- spdk/autotest.sh@121 -- # grep -v p 00:03:02.359 22:01:57 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:02.359 22:01:57 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:02.359 22:01:57 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:02.359 22:01:57 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:02.359 22:01:57 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:02.359 No valid GPT data, bailing 00:03:02.359 22:01:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:02.359 22:01:57 -- scripts/common.sh@393 -- # pt= 00:03:02.359 22:01:57 -- scripts/common.sh@394 -- # return 1 00:03:02.359 22:01:57 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:02.359 1+0 records in 00:03:02.359 1+0 records out 00:03:02.359 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00602538 s, 174 MB/s 00:03:02.359 22:01:57 -- spdk/autotest.sh@129 -- # sync 00:03:02.359 22:01:57 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:02.359 22:01:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:02.359 22:01:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:07.631 22:02:02 -- spdk/autotest.sh@135 -- # uname -s 00:03:07.631 22:02:02 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:07.631 22:02:02 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:07.631 22:02:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:07.631 22:02:02 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:03:07.631 22:02:02 -- common/autotest_common.sh@10 -- # set +x 00:03:07.631 ************************************ 00:03:07.631 START TEST setup.sh 00:03:07.631 ************************************ 00:03:07.631 22:02:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:07.631 * Looking for test storage... 00:03:07.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:07.631 22:02:02 -- setup/test-setup.sh@10 -- # uname -s 00:03:07.631 22:02:02 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:07.631 22:02:02 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:07.631 22:02:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:07.631 22:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:07.631 22:02:02 -- common/autotest_common.sh@10 -- # set +x 00:03:07.631 ************************************ 00:03:07.631 START TEST acl 00:03:07.631 ************************************ 00:03:07.631 22:02:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:07.631 * Looking for test storage... 00:03:07.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:07.631 22:02:02 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:07.631 22:02:02 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:07.631 22:02:02 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:07.631 22:02:02 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:07.631 22:02:02 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:07.631 22:02:02 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:07.631 22:02:02 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:07.631 22:02:02 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:07.632 22:02:02 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:07.632 22:02:02 -- setup/acl.sh@12 -- # devs=() 00:03:07.632 22:02:02 -- setup/acl.sh@12 -- # declare -a devs 00:03:07.632 22:02:02 -- setup/acl.sh@13 -- # drivers=() 00:03:07.632 22:02:02 -- setup/acl.sh@13 -- # declare -A drivers 00:03:07.632 22:02:02 -- setup/acl.sh@51 -- # setup reset 00:03:07.632 22:02:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.632 22:02:02 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.951 22:02:05 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:10.951 22:02:05 -- setup/acl.sh@16 -- # local dev driver 00:03:10.951 22:02:05 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.951 22:02:05 -- setup/acl.sh@15 -- # setup output status 00:03:10.951 22:02:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.951 22:02:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:13.481 Hugepages 00:03:13.481 node hugesize free / total 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- 
setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 00:03:13.481 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:13.481 22:02:08 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 
00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:13.481 22:02:08 -- setup/acl.sh@20 -- # continue 00:03:13.481 22:02:08 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:13.481 22:02:08 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:13.481 22:02:08 -- setup/acl.sh@54 -- # run_test denied denied 00:03:13.481 22:02:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:13.481 22:02:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:13.481 22:02:08 -- common/autotest_common.sh@10 -- # set +x 00:03:13.481 ************************************ 00:03:13.481 START TEST denied 00:03:13.481 ************************************ 00:03:13.481 22:02:08 -- common/autotest_common.sh@1104 -- # denied 00:03:13.481 22:02:08 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:13.481 22:02:08 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:13.481 22:02:08 -- setup/acl.sh@38 -- # setup output config 00:03:13.481 22:02:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.481 22:02:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:16.013 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:16.013 22:02:10 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:16.013 22:02:10 -- setup/acl.sh@28 -- # local dev driver 00:03:16.013 22:02:10 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:16.013 22:02:10 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:16.013 22:02:10 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:16.013 22:02:10 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:16.013 22:02:10 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:16.013 22:02:10 -- setup/acl.sh@41 -- # setup reset 00:03:16.013 22:02:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.013 22:02:10 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 
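The denied check traced above reduces to one sysfs lookup: resolve the driver symlink for the controller's BDF and compare its basename with the expected driver name. A minimal standalone sketch of that idea, using only bash and sysfs (check_pci_driver and the sample BDF are illustrative, not part of the SPDK scripts):

#!/usr/bin/env bash
# Minimal sketch: report which kernel driver a PCI function is bound to,
# mirroring the readlink-based verification traced in the log above.
# check_pci_driver and the example BDF are illustrative names only.
check_pci_driver() {
    local bdf=$1 expected=$2 driver
    local link=/sys/bus/pci/devices/$bdf/driver

    # An unbound device has no 'driver' symlink at all.
    [[ -e $link ]] || { echo "$bdf: not bound to any driver"; return 1; }

    driver=$(basename "$(readlink -f "$link")")
    echo "$bdf: bound to $driver"
    [[ $driver == "$expected" ]]
}

# Example invocation (BDF taken from the log above; adjust for your system).
check_pci_driver 0000:5e:00.0 nvme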
00:03:19.299 00:03:19.299 real 0m6.092s 00:03:19.299 user 0m1.803s 00:03:19.299 sys 0m3.576s 00:03:19.299 22:02:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:19.299 22:02:14 -- common/autotest_common.sh@10 -- # set +x 00:03:19.299 ************************************ 00:03:19.299 END TEST denied 00:03:19.299 ************************************ 00:03:19.299 22:02:14 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:19.299 22:02:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:19.299 22:02:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:19.299 22:02:14 -- common/autotest_common.sh@10 -- # set +x 00:03:19.299 ************************************ 00:03:19.299 START TEST allowed 00:03:19.299 ************************************ 00:03:19.299 22:02:14 -- common/autotest_common.sh@1104 -- # allowed 00:03:19.299 22:02:14 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:19.299 22:02:14 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:19.299 22:02:14 -- setup/acl.sh@45 -- # setup output config 00:03:19.299 22:02:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.299 22:02:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:23.489 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:23.489 22:02:18 -- setup/acl.sh@47 -- # verify 00:03:23.489 22:02:18 -- setup/acl.sh@28 -- # local dev driver 00:03:23.489 22:02:18 -- setup/acl.sh@48 -- # setup reset 00:03:23.489 22:02:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.489 22:02:18 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:26.022 00:03:26.022 real 0m6.601s 00:03:26.022 user 0m2.027s 00:03:26.022 sys 0m3.767s 00:03:26.022 22:02:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.022 22:02:20 -- common/autotest_common.sh@10 -- # set +x 00:03:26.022 ************************************ 00:03:26.022 END TEST allowed 00:03:26.022 ************************************ 00:03:26.022 00:03:26.022 real 0m18.496s 00:03:26.022 user 0m5.911s 00:03:26.022 sys 0m11.247s 00:03:26.022 22:02:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.022 22:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:26.022 ************************************ 00:03:26.022 END TEST acl 00:03:26.022 ************************************ 00:03:26.022 22:02:21 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:26.022 22:02:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:26.022 22:02:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:26.022 22:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:26.022 ************************************ 00:03:26.022 START TEST hugepages 00:03:26.022 ************************************ 00:03:26.022 22:02:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:26.022 * Looking for test storage... 
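Before either acl pass runs, the scripts filter out zoned namespaces: a block device counts as zoned when /sys/block/<dev>/queue/zoned exists and holds anything other than none. A small standalone sketch of that filter, offered as an illustration rather than the SPDK helper itself (list_zoned_nvme is an invented name):

#!/usr/bin/env bash
# Minimal sketch of the zoned-device filtering traced above: walk the NVMe
# block devices and report which ones the kernel exposes as zoned.
# list_zoned_nvme is an illustrative name, not an SPDK helper.
list_zoned_nvme() {
    local dev zoned
    for dev in /sys/block/nvme*n*; do
        [[ -e $dev ]] || continue            # no NVMe namespaces present
        zoned=none
        # Older kernels may not expose the zoned attribute at all.
        [[ -e $dev/queue/zoned ]] && zoned=$(<"$dev/queue/zoned")
        if [[ $zoned != none ]]; then
            echo "${dev##*/}: zoned ($zoned)"
        else
            echo "${dev##*/}: conventional"
        fi
    done
}

list_zoned_nvme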
00:03:26.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:26.022 22:02:21 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:26.022 22:02:21 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:26.022 22:02:21 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:26.022 22:02:21 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:26.022 22:02:21 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:26.022 22:02:21 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:26.022 22:02:21 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:26.022 22:02:21 -- setup/common.sh@18 -- # local node= 00:03:26.022 22:02:21 -- setup/common.sh@19 -- # local var val 00:03:26.022 22:02:21 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.022 22:02:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.022 22:02:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.022 22:02:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.022 22:02:21 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.022 22:02:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.022 22:02:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 167112760 kB' 'MemAvailable: 170351564 kB' 'Buffers: 3896 kB' 'Cached: 15922784 kB' 'SwapCached: 0 kB' 'Active: 12775236 kB' 'Inactive: 3694312 kB' 'Active(anon): 12357280 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546348 kB' 'Mapped: 193608 kB' 'Shmem: 11814412 kB' 'KReclaimable: 541504 kB' 'Slab: 1203212 kB' 'SReclaimable: 541504 kB' 'SUnreclaim: 661708 kB' 'KernelStack: 20864 kB' 'PageTables: 9260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982020 kB' 'Committed_AS: 13900956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.022 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.022 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.023 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.023 22:02:21 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.023 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.023 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.023 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.023 22:02:21 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.023 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.023 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.023 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.023 22:02:21 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.023 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.023 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.023 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.023 22:02:21 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.023 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.023 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.023 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.023 22:02:21 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.023 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.023 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.282 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.282 22:02:21 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # continue 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.283 22:02:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.283 22:02:21 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:26.283 22:02:21 -- setup/common.sh@33 -- # echo 2048 00:03:26.283 22:02:21 -- setup/common.sh@33 -- # return 0 00:03:26.283 22:02:21 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:26.283 22:02:21 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:26.283 22:02:21 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:26.283 22:02:21 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:26.283 22:02:21 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:26.283 22:02:21 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:26.283 22:02:21 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:26.283 22:02:21 -- setup/hugepages.sh@207 -- # get_nodes 00:03:26.283 22:02:21 -- setup/hugepages.sh@27 -- # local node 00:03:26.283 22:02:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.283 22:02:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:26.283 22:02:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.283 22:02:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:26.283 22:02:21 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.283 22:02:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.283 22:02:21 -- setup/hugepages.sh@208 -- # clear_hp 00:03:26.283 22:02:21 -- setup/hugepages.sh@37 -- # local node hp 00:03:26.283 22:02:21 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:26.283 22:02:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.283 22:02:21 -- setup/hugepages.sh@41 -- # echo 0 00:03:26.283 22:02:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.283 22:02:21 -- setup/hugepages.sh@41 -- # echo 0 00:03:26.283 22:02:21 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:26.283 22:02:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.283 22:02:21 -- setup/hugepages.sh@41 -- # echo 0 00:03:26.283 22:02:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:26.283 22:02:21 -- setup/hugepages.sh@41 -- # echo 0 00:03:26.283 22:02:21 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:26.283 22:02:21 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:26.283 22:02:21 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:26.283 22:02:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:26.283 22:02:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:26.283 22:02:21 -- common/autotest_common.sh@10 -- # set +x 00:03:26.283 ************************************ 00:03:26.283 START TEST default_setup 00:03:26.283 ************************************ 00:03:26.283 22:02:21 -- common/autotest_common.sh@1104 -- # default_setup 00:03:26.283 22:02:21 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:26.284 22:02:21 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.284 22:02:21 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:26.284 22:02:21 -- setup/hugepages.sh@51 -- # shift 00:03:26.284 22:02:21 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:26.284 22:02:21 -- setup/hugepages.sh@52 -- # local node_ids 00:03:26.284 22:02:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.284 22:02:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.284 22:02:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:26.284 22:02:21 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:26.284 22:02:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.284 22:02:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.284 22:02:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.284 22:02:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.284 22:02:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.284 22:02:21 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
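The get_meminfo trace above is a plain field lookup over /proc/meminfo: scan key/value pairs until the requested key matches, then print its value in kB (Hugepagesize resolves to 2048 here). A compact standalone equivalent, assuming the usual "Key: value kB" layout (meminfo_field is an invented name, not the SPDK helper):

#!/usr/bin/env bash
# Minimal sketch of the /proc/meminfo lookup traced above: print the value
# (in kB where applicable) of a single field such as Hugepagesize.
# meminfo_field is an illustrative name, not the SPDK get_meminfo helper.
meminfo_field() {
    local want=$1 key val _
    while IFS=': ' read -r key val _; do
        if [[ $key == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done </proc/meminfo
    return 1    # field not present on this kernel
}

# Example: derive the default hugepage figures the same way the test does.
hugepagesize_kb=$(meminfo_field Hugepagesize)    # typically 2048 on x86_64
hugepages_total=$(meminfo_field HugePages_Total)
echo "Hugepagesize: ${hugepagesize_kb} kB, HugePages_Total: ${hugepages_total}"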
00:03:26.284 22:02:21 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:26.284 22:02:21 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:26.284 22:02:21 -- setup/hugepages.sh@73 -- # return 0 00:03:26.284 22:02:21 -- setup/hugepages.sh@137 -- # setup output 00:03:26.284 22:02:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.284 22:02:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.820 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:28.820 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:29.759 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:29.759 22:02:24 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:29.759 22:02:24 -- setup/hugepages.sh@89 -- # local node 00:03:29.759 22:02:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.759 22:02:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.759 22:02:24 -- setup/hugepages.sh@92 -- # local surp 00:03:29.759 22:02:24 -- setup/hugepages.sh@93 -- # local resv 00:03:29.759 22:02:24 -- setup/hugepages.sh@94 -- # local anon 00:03:29.759 22:02:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.759 22:02:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.759 22:02:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.759 22:02:24 -- setup/common.sh@18 -- # local node= 00:03:29.760 22:02:24 -- setup/common.sh@19 -- # local var val 00:03:29.760 22:02:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.760 22:02:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.760 22:02:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.760 22:02:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.760 22:02:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.760 22:02:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169263356 kB' 'MemAvailable: 172502096 kB' 'Buffers: 3896 kB' 'Cached: 15922892 kB' 'SwapCached: 0 kB' 'Active: 12791932 kB' 'Inactive: 3694312 kB' 'Active(anon): 12373976 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563108 kB' 'Mapped: 193548 kB' 'Shmem: 11814520 kB' 'KReclaimable: 541376 kB' 'Slab: 1201732 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 660356 kB' 'KernelStack: 
20656 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13919488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- 
# [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.760 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.760 22:02:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.761 22:02:24 -- setup/common.sh@32 -- # continue 00:03:29.761 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.023 22:02:24 -- setup/common.sh@33 -- # echo 0 00:03:30.023 22:02:24 -- setup/common.sh@33 -- # return 0 00:03:30.023 22:02:24 -- setup/hugepages.sh@97 -- # anon=0 00:03:30.023 22:02:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.023 22:02:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.023 22:02:24 -- setup/common.sh@18 -- # local node= 00:03:30.023 22:02:24 -- setup/common.sh@19 -- # local var val 00:03:30.023 22:02:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.023 22:02:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.023 22:02:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.023 22:02:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.023 22:02:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.023 22:02:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169267840 kB' 'MemAvailable: 172506580 kB' 'Buffers: 3896 kB' 'Cached: 15922896 kB' 'SwapCached: 0 kB' 'Active: 12791328 kB' 'Inactive: 3694312 kB' 'Active(anon): 12373372 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562416 kB' 'Mapped: 193548 kB' 'Shmem: 11814524 kB' 'KReclaimable: 541376 kB' 'Slab: 1201748 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 660372 kB' 'KernelStack: 20672 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13919500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317096 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 
22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.023 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.023 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': 
' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.024 22:02:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.024 22:02:24 -- setup/common.sh@33 -- # echo 0 00:03:30.024 22:02:24 -- setup/common.sh@33 -- # return 0 00:03:30.024 22:02:24 -- setup/hugepages.sh@99 -- # surp=0 00:03:30.024 22:02:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.024 22:02:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.024 22:02:24 -- setup/common.sh@18 -- # local node= 00:03:30.024 22:02:24 -- setup/common.sh@19 -- # local var val 00:03:30.024 22:02:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.024 22:02:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.024 22:02:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.024 22:02:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.024 22:02:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.024 22:02:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.024 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169267344 kB' 'MemAvailable: 172506084 kB' 'Buffers: 3896 kB' 'Cached: 15922908 kB' 'SwapCached: 0 kB' 'Active: 12791328 kB' 'Inactive: 3694312 kB' 'Active(anon): 12373372 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562404 kB' 'Mapped: 193548 kB' 'Shmem: 11814536 kB' 'KReclaimable: 541376 kB' 'Slab: 1201748 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 660372 kB' 'KernelStack: 20672 kB' 'PageTables: 9208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13919516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317096 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 
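The block above is common.sh's get_meminfo helper scanning /proc/meminfo one key at a time: it mapfiles the file, strips any "Node <id> " prefix, then reads each "key: value" pair with IFS=': ' and skips (continue) every key that is not the one requested, finally echoing the matching value. A minimal stand-alone sketch of that pattern (not the exact SPDK helper; the function name and return codes here are illustrative) looks like this:

  # Sketch of the get_meminfo pattern traced above: pick /proc/meminfo or a
  # per-node meminfo, drop the "Node <id> " prefix, and scan key/value pairs
  # until the requested key is found.
  shopt -s extglob   # needed for the +([0-9]) pattern, as in setup/common.sh
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix every line
      local line var val _
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue # this is the long "continue" chain in the log
          echo "$val"                      # value in kB, or a bare count for HugePages_*
          return 0
      done
      return 1
  }
  # e.g.: get_meminfo_sketch HugePages_Surp 0   -> surplus huge pages on node 0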
00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- 
setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.025 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.025 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 
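The backslash-escaped strings in these comparisons (e.g. \H\u\g\e\P\a\g\e\s\_\R\s\v\d) are an artifact of xtrace, not of the script: inside [[ ]] the right-hand side of == is treated as a glob pattern, so the helper quotes it to force a literal comparison, and bash's -x output re-prints that quoted word with every character escaped. A small illustration (hypothetical variable name) reproduces the same trace shape:

  get='HugePages_Rsvd'
  set -x
  [[ HugePages_Rsvd == "$get" ]]      && echo literal   # traced as: == \H\u\g\e\P\a\g\e\s\_\R\s\v\d
  [[ HugePages_Rsvd == HugePages_* ]] && echo pattern   # unquoted RHS stays a glob match
  set +x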
00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.026 22:02:24 -- setup/common.sh@33 -- # echo 0 00:03:30.026 22:02:24 -- setup/common.sh@33 -- # return 0 00:03:30.026 22:02:24 -- setup/hugepages.sh@100 -- # resv=0 00:03:30.026 22:02:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.026 nr_hugepages=1024 00:03:30.026 22:02:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.026 resv_hugepages=0 00:03:30.026 22:02:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.026 surplus_hugepages=0 00:03:30.026 22:02:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.026 anon_hugepages=0 00:03:30.026 22:02:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.026 22:02:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.026 22:02:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.026 22:02:24 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:03:30.026 22:02:24 -- setup/common.sh@18 -- # local node= 00:03:30.026 22:02:24 -- setup/common.sh@19 -- # local var val 00:03:30.026 22:02:24 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.026 22:02:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.026 22:02:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.026 22:02:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.026 22:02:24 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.026 22:02:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169266628 kB' 'MemAvailable: 172505368 kB' 'Buffers: 3896 kB' 'Cached: 15922920 kB' 'SwapCached: 0 kB' 'Active: 12793236 kB' 'Inactive: 3694312 kB' 'Active(anon): 12375280 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564340 kB' 'Mapped: 194052 kB' 'Shmem: 11814548 kB' 'KReclaimable: 541376 kB' 'Slab: 1201748 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 660372 kB' 'KernelStack: 20688 kB' 'PageTables: 9276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13924464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317080 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.026 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.026 22:02:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 
22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 
22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:24 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:24 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.027 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.027 22:02:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.027 22:02:25 -- setup/common.sh@33 -- # echo 1024 00:03:30.027 22:02:25 -- setup/common.sh@33 -- # return 0 00:03:30.027 22:02:25 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.027 22:02:25 -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.027 22:02:25 -- setup/hugepages.sh@27 -- # local node 00:03:30.027 22:02:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.028 22:02:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.028 22:02:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.028 22:02:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.028 22:02:25 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.028 22:02:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.028 22:02:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.028 22:02:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.028 22:02:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.028 22:02:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.028 22:02:25 -- setup/common.sh@18 -- # local node=0 00:03:30.028 22:02:25 -- setup/common.sh@19 -- # local var val 00:03:30.028 22:02:25 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.028 22:02:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.028 22:02:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.028 22:02:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.028 22:02:25 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.028 22:02:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.028 22:02:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90947636 kB' 'MemUsed: 6667992 kB' 'SwapCached: 0 kB' 'Active: 2975956 kB' 'Inactive: 218172 kB' 'Active(anon): 2814132 kB' 
'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 218172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3057880 kB' 'Mapped: 102004 kB' 'AnonPages: 139708 kB' 'Shmem: 2677884 kB' 'KernelStack: 11432 kB' 'PageTables: 3620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349956 kB' 'Slab: 649660 kB' 'SReclaimable: 349956 kB' 'SUnreclaim: 299704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.028 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.028 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 
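At this point verify_nr_hugepages has switched from the global /proc/meminfo to the per-node file /sys/devices/system/node/node0/meminfo and repeats the same key scan (here for HugePages_Surp), so each node's huge page allocation can be compared with what the test expects; the trace goes on to print "node0=1024 expecting 1024". A rough stand-alone equivalent of that per-node check, with a hypothetical expectation table standing in for hugepages.sh's nodes_test array, would be:

  # Assumed expectations for illustration only; the real values come from
  # hugepages.sh's nodes_test bookkeeping.
  declare -A expected=( [0]=1024 [1]=0 )
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
      echo "node${node}=${total} expecting ${expected[$node]:-0}"
      [[ $total -eq ${expected[$node]:-0} ]] || echo "node $node mismatch" >&2
  done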
00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # continue 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.029 22:02:25 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.029 22:02:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.029 22:02:25 -- setup/common.sh@33 -- # echo 0 00:03:30.029 22:02:25 -- setup/common.sh@33 -- # return 0 00:03:30.029 22:02:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.029 22:02:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.029 22:02:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.029 22:02:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.029 22:02:25 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.029 node0=1024 expecting 1024 00:03:30.029 22:02:25 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.029 00:03:30.029 real 0m3.838s 00:03:30.029 user 0m1.208s 00:03:30.029 sys 0m1.834s 00:03:30.029 22:02:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.029 22:02:25 -- common/autotest_common.sh@10 -- # set +x 00:03:30.029 ************************************ 00:03:30.029 END TEST default_setup 00:03:30.029 ************************************ 00:03:30.029 22:02:25 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:30.029 22:02:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:30.029 22:02:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:30.029 22:02:25 -- common/autotest_common.sh@10 -- # set +x 00:03:30.029 ************************************ 00:03:30.029 START TEST per_node_1G_alloc 00:03:30.029 ************************************ 00:03:30.029 22:02:25 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:30.029 22:02:25 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:30.029 22:02:25 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:30.029 22:02:25 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:30.029 22:02:25 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:30.029 22:02:25 -- setup/hugepages.sh@51 -- # shift 00:03:30.029 22:02:25 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:30.029 22:02:25 -- setup/hugepages.sh@52 -- # local node_ids 00:03:30.029 22:02:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.029 22:02:25 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:30.029 22:02:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:30.029 22:02:25 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:30.029 22:02:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.029 22:02:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:30.029 22:02:25 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.029 22:02:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.029 22:02:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.029 22:02:25 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:30.029 22:02:25 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.029 22:02:25 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:30.029 22:02:25 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:30.029 22:02:25 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:30.029 22:02:25 -- setup/hugepages.sh@73 -- # return 0 00:03:30.029 22:02:25 -- 
setup/hugepages.sh@146 -- # NRHUGE=512 00:03:30.029 22:02:25 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:30.029 22:02:25 -- setup/hugepages.sh@146 -- # setup output 00:03:30.029 22:02:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.029 22:02:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:32.565 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:32.565 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:32.565 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:32.830 22:02:27 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:32.830 22:02:27 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:32.830 22:02:27 -- setup/hugepages.sh@89 -- # local node 00:03:32.830 22:02:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.830 22:02:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.830 22:02:27 -- setup/hugepages.sh@92 -- # local surp 00:03:32.830 22:02:27 -- setup/hugepages.sh@93 -- # local resv 00:03:32.830 22:02:27 -- setup/hugepages.sh@94 -- # local anon 00:03:32.830 22:02:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.830 22:02:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.830 22:02:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.830 22:02:27 -- setup/common.sh@18 -- # local node= 00:03:32.830 22:02:27 -- setup/common.sh@19 -- # local var val 00:03:32.830 22:02:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.830 22:02:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.830 22:02:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.830 22:02:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.830 22:02:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.830 22:02:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169273976 kB' 'MemAvailable: 172512716 kB' 'Buffers: 3896 kB' 'Cached: 15922992 kB' 'SwapCached: 0 kB' 'Active: 12792272 kB' 'Inactive: 3694312 kB' 'Active(anon): 12374316 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 562804 kB' 'Mapped: 193576 kB' 'Shmem: 11814620 kB' 'KReclaimable: 541376 kB' 'Slab: 1201140 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659764 kB' 'KernelStack: 20640 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13919852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317288 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.830 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.830 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- 
setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 
-- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.831 22:02:27 -- setup/common.sh@33 -- # echo 0 00:03:32.831 22:02:27 -- setup/common.sh@33 -- # return 0 00:03:32.831 22:02:27 -- setup/hugepages.sh@97 -- # anon=0 00:03:32.831 22:02:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.831 22:02:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.831 22:02:27 -- setup/common.sh@18 -- # local node= 00:03:32.831 22:02:27 -- setup/common.sh@19 -- # local var val 00:03:32.831 22:02:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.831 22:02:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.831 22:02:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.831 22:02:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.831 22:02:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.831 22:02:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169274592 kB' 'MemAvailable: 172513332 kB' 'Buffers: 3896 kB' 'Cached: 15922992 kB' 'SwapCached: 0 kB' 'Active: 12792152 kB' 'Inactive: 3694312 kB' 'Active(anon): 12374196 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562764 kB' 'Mapped: 193552 kB' 'Shmem: 11814620 kB' 'KReclaimable: 541376 kB' 'Slab: 1201100 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659724 kB' 'KernelStack: 20672 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13919864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317256 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 
-- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.831 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.831 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- 
setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 
22:02:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 
22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.832 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.832 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.832 22:02:27 -- setup/common.sh@33 -- # echo 0 00:03:32.832 22:02:27 -- setup/common.sh@33 -- # return 0 00:03:32.832 22:02:27 -- setup/hugepages.sh@99 -- # surp=0 00:03:32.832 22:02:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.832 22:02:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.832 22:02:27 -- setup/common.sh@18 -- # local node= 00:03:32.832 22:02:27 -- setup/common.sh@19 -- # local var val 00:03:32.832 22:02:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.832 22:02:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.832 22:02:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.832 22:02:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.833 22:02:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.833 22:02:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169274928 kB' 'MemAvailable: 172513668 kB' 'Buffers: 3896 kB' 'Cached: 15923004 kB' 'SwapCached: 0 kB' 'Active: 12792592 kB' 'Inactive: 3694312 kB' 'Active(anon): 12374636 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563236 kB' 'Mapped: 193552 kB' 'Shmem: 11814632 kB' 'KReclaimable: 541376 kB' 'Slab: 1201100 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659724 kB' 'KernelStack: 20704 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13922660 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 
-- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- 
# [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.833 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.833 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- 
setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.834 22:02:27 -- setup/common.sh@33 -- # echo 0 00:03:32.834 22:02:27 -- setup/common.sh@33 -- # return 0 00:03:32.834 22:02:27 -- setup/hugepages.sh@100 -- # resv=0 00:03:32.834 22:02:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:32.834 nr_hugepages=1024 00:03:32.834 22:02:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.834 resv_hugepages=0 00:03:32.834 22:02:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.834 surplus_hugepages=0 00:03:32.834 22:02:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.834 anon_hugepages=0 00:03:32.834 22:02:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.834 22:02:27 -- setup/hugepages.sh@109 -- 
# (( 1024 == nr_hugepages )) 00:03:32.834 22:02:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.834 22:02:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.834 22:02:27 -- setup/common.sh@18 -- # local node= 00:03:32.834 22:02:27 -- setup/common.sh@19 -- # local var val 00:03:32.834 22:02:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.834 22:02:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.834 22:02:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.834 22:02:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.834 22:02:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.834 22:02:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169275672 kB' 'MemAvailable: 172514412 kB' 'Buffers: 3896 kB' 'Cached: 15923020 kB' 'SwapCached: 0 kB' 'Active: 12793324 kB' 'Inactive: 3694312 kB' 'Active(anon): 12375368 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563988 kB' 'Mapped: 193552 kB' 'Shmem: 11814648 kB' 'KReclaimable: 541376 kB' 'Slab: 1201100 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659724 kB' 'KernelStack: 20656 kB' 'PageTables: 9192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13922580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317288 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 
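The long runs of IFS=': ' / read -r / continue entries above and below come from the get_meminfo helper in setup/common.sh walking every key of /proc/meminfo (or a per-node meminfo file) until it reaches the requested counter. A minimal bash sketch of that pattern, paraphrased from the traced commands rather than copied from the SPDK source, and assuming extglob is enabled as the traced 'Node +([0-9])' strip implies:

shopt -s extglob

# Sketch only: names follow the trace (get, node, mem_f, mem, var, val),
# the exact SPDK implementation may differ in detail.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo mem var val _

    # With a node argument, read the per-node counters instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every key with "Node <n> "; strip that so the
    # key names line up with /proc/meminfo.
    mem=("${mem[@]#Node +([0-9]) }")

    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Skip every key that is not the requested one (the long run of
        # "continue" entries in the trace), then print its value in kB/pages.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# Example matching the traced calls: get_meminfo HugePages_Surp 0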
00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.834 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.834 22:02:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 
-- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.835 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.835 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.836 22:02:27 -- setup/common.sh@33 -- # echo 1024 00:03:32.836 22:02:27 -- setup/common.sh@33 -- # return 0 00:03:32.836 22:02:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.836 22:02:27 -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.836 22:02:27 -- setup/hugepages.sh@27 -- # local node 00:03:32.836 22:02:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.836 22:02:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:32.836 22:02:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.836 22:02:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:32.836 22:02:27 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.836 22:02:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.836 22:02:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.836 22:02:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.836 22:02:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.836 22:02:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.836 22:02:27 -- setup/common.sh@18 -- # local node=0 00:03:32.836 22:02:27 -- setup/common.sh@19 -- # local var val 00:03:32.836 22:02:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.836 22:02:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.836 22:02:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.836 22:02:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.836 22:02:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.836 22:02:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91996752 kB' 'MemUsed: 5618876 kB' 'SwapCached: 0 kB' 'Active: 2975052 kB' 'Inactive: 218172 kB' 'Active(anon): 2813228 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 218172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3057948 kB' 'Mapped: 101856 kB' 'AnonPages: 138428 kB' 'Shmem: 2677952 kB' 'KernelStack: 11352 kB' 'PageTables: 3400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349956 kB' 'Slab: 649364 kB' 'SReclaimable: 349956 kB' 'SUnreclaim: 299408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 
-- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.836 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.836 22:02:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 
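The long runs of field checks followed by 'continue' above and below are setup/common.sh's get_meminfo helper scanning a meminfo file one field at a time: it picks /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a node id is given, strips the leading 'Node <N> ' prefix, splits each line on ': ', and keeps skipping until the requested key (HugePages_Total earlier, HugePages_Surp per node here) matches, then echoes the value and returns. A minimal, self-contained sketch of that loop, reconstructed from the xtrace rather than copied from the SPDK source (names follow the trace; the exact control flow is an assumption):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip the node prefix

    # Sketch of the meminfo scan traced above: echo the value of field $1,
    # reading the per-node meminfo file when a node id is passed as $2.
    get_meminfo() {
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # per-node files prefix each line with "Node <N> "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated 'continue' lines in the trace
            echo "$val"                        # e.g. 1024 for HugePages_Total above
            return 0
        done
        return 1
    }

    # Hypothetical usage mirroring the checks in this run:
    #   get_meminfo HugePages_Total     -> 1024 (system-wide)
    #   get_meminfo HugePages_Surp 0    -> 0    (node 0)
    #   get_meminfo HugePages_Surp 1    -> 0    (node 1)

hugepages.sh combines these values: the system-wide total has to satisfy the (( 1024 == nr_hugepages + surp + resv )) check above, and each node's configured share plus its surplus has to match the expected split, which is where the 'node0=512 expecting 512' and 'node1=512 expecting 512' lines further down come from.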
00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@33 -- # echo 0 00:03:32.837 22:02:27 -- setup/common.sh@33 -- # return 0 00:03:32.837 22:02:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.837 22:02:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.837 22:02:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.837 22:02:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:32.837 22:02:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.837 22:02:27 -- setup/common.sh@18 -- # local node=1 00:03:32.837 22:02:27 -- setup/common.sh@19 -- # local var val 00:03:32.837 22:02:27 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.837 22:02:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.837 22:02:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:32.837 22:02:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:32.837 22:02:27 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.837 22:02:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77281624 kB' 'MemUsed: 16483884 kB' 'SwapCached: 0 kB' 'Active: 9815420 kB' 'Inactive: 3476140 kB' 'Active(anon): 9559288 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3476140 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12868988 kB' 'Mapped: 91696 kB' 'AnonPages: 422652 kB' 'Shmem: 9136716 kB' 'KernelStack: 9336 kB' 'PageTables: 6056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 191420 kB' 'Slab: 551736 kB' 'SReclaimable: 191420 kB' 'SUnreclaim: 360316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # 
continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.837 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.837 22:02:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.838 22:02:27 
-- setup/common.sh@31 -- # read -r var val _ 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # continue 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.838 22:02:27 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.838 22:02:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.838 22:02:27 -- setup/common.sh@33 -- # echo 0 00:03:32.838 22:02:27 -- setup/common.sh@33 -- # return 0 00:03:32.838 22:02:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.838 22:02:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.838 22:02:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.838 22:02:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.838 22:02:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:32.838 node0=512 expecting 512 00:03:32.838 22:02:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.838 22:02:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.838 22:02:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.838 22:02:27 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:32.838 node1=512 expecting 512 00:03:32.838 22:02:27 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:32.838 00:03:32.838 real 0m2.802s 00:03:32.838 user 0m1.137s 00:03:32.838 sys 0m1.734s 00:03:32.838 22:02:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.838 22:02:27 -- common/autotest_common.sh@10 -- # set +x 00:03:32.838 ************************************ 00:03:32.838 END TEST per_node_1G_alloc 00:03:32.838 ************************************ 00:03:32.838 22:02:27 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:32.838 22:02:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:32.838 22:02:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:32.838 22:02:27 -- common/autotest_common.sh@10 -- # set +x 00:03:32.838 ************************************ 00:03:32.838 START TEST even_2G_alloc 00:03:32.838 ************************************ 00:03:32.838 22:02:27 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:32.838 22:02:27 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:32.838 22:02:27 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.838 22:02:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:32.838 22:02:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.838 22:02:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.838 22:02:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.838 22:02:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.838 22:02:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.838 22:02:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.838 22:02:27 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.838 22:02:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.838 22:02:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.838 22:02:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.838 22:02:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:32.838 22:02:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.838 22:02:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:32.838 22:02:27 -- setup/hugepages.sh@83 -- # : 512 00:03:32.838 22:02:27 -- setup/hugepages.sh@84 -- # : 1 00:03:32.838 22:02:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.838 22:02:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:32.838 22:02:27 -- setup/hugepages.sh@83 -- # : 0 00:03:32.838 22:02:27 -- setup/hugepages.sh@84 -- # : 0 00:03:32.838 22:02:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.838 22:02:27 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:32.838 22:02:27 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:32.838 22:02:27 -- setup/hugepages.sh@153 -- # setup output 00:03:32.838 22:02:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.838 22:02:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:35.381 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:35.381 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:80:04.2 (8086 
2021): Already using the vfio-pci driver 00:03:35.381 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:35.381 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:35.685 22:02:30 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:35.685 22:02:30 -- setup/hugepages.sh@89 -- # local node 00:03:35.685 22:02:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:35.685 22:02:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:35.685 22:02:30 -- setup/hugepages.sh@92 -- # local surp 00:03:35.685 22:02:30 -- setup/hugepages.sh@93 -- # local resv 00:03:35.685 22:02:30 -- setup/hugepages.sh@94 -- # local anon 00:03:35.685 22:02:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.685 22:02:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:35.685 22:02:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.685 22:02:30 -- setup/common.sh@18 -- # local node= 00:03:35.685 22:02:30 -- setup/common.sh@19 -- # local var val 00:03:35.685 22:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.685 22:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.685 22:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.685 22:02:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.685 22:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.685 22:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.685 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.685 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169284236 kB' 'MemAvailable: 172522976 kB' 'Buffers: 3896 kB' 'Cached: 15923116 kB' 'SwapCached: 0 kB' 'Active: 12787332 kB' 'Inactive: 3694312 kB' 'Active(anon): 12369376 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557560 kB' 'Mapped: 192660 kB' 'Shmem: 11814744 kB' 'KReclaimable: 541376 kB' 'Slab: 1200880 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659504 kB' 'KernelStack: 20544 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13896336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317080 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 
22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.686 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.686 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.687 22:02:30 -- 
setup/common.sh@33 -- # echo 0 00:03:35.687 22:02:30 -- setup/common.sh@33 -- # return 0 00:03:35.687 22:02:30 -- setup/hugepages.sh@97 -- # anon=0 00:03:35.687 22:02:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.687 22:02:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.687 22:02:30 -- setup/common.sh@18 -- # local node= 00:03:35.687 22:02:30 -- setup/common.sh@19 -- # local var val 00:03:35.687 22:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.687 22:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.687 22:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.687 22:02:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.687 22:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.687 22:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169286164 kB' 'MemAvailable: 172524904 kB' 'Buffers: 3896 kB' 'Cached: 15923120 kB' 'SwapCached: 0 kB' 'Active: 12786952 kB' 'Inactive: 3694312 kB' 'Active(anon): 12368996 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557700 kB' 'Mapped: 192544 kB' 'Shmem: 11814748 kB' 'KReclaimable: 541376 kB' 'Slab: 1200768 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659392 kB' 'KernelStack: 20544 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13900156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317032 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 
22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 
22:02:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.687 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.687 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': 
' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.688 22:02:30 -- setup/common.sh@33 -- # echo 0 00:03:35.688 22:02:30 -- setup/common.sh@33 -- # return 0 00:03:35.688 22:02:30 -- setup/hugepages.sh@99 -- # surp=0 00:03:35.688 22:02:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.688 22:02:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.688 22:02:30 -- setup/common.sh@18 -- # local node= 00:03:35.688 22:02:30 -- setup/common.sh@19 -- # local var val 00:03:35.688 22:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.688 22:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.688 22:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.688 22:02:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.688 22:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.688 22:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169286168 kB' 'MemAvailable: 172524908 kB' 'Buffers: 3896 kB' 'Cached: 15923132 kB' 'SwapCached: 0 kB' 'Active: 12786984 kB' 'Inactive: 3694312 kB' 'Active(anon): 12369028 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557772 kB' 'Mapped: 192544 kB' 'Shmem: 11814760 kB' 'KReclaimable: 541376 kB' 'Slab: 1200768 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659392 kB' 'KernelStack: 20640 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13899268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317096 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.688 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.688 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 
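By this point in the trace, get_meminfo has finished scanning /proc/meminfo for HugePages_Surp (it returned 0, so surp=0) and has started the same scan again for HugePages_Rsvd against the snapshot printed above. Condensed into a minimal standalone sketch, the lookup works roughly like this (simplified; not the exact setup/common.sh source):

    # Look up one meminfo key, either system-wide or for a single NUMA node.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}        # per-node files prefix every line with "Node <n> "
            var=${line%%:*}                   # key before the colon
            val=${line#*:}                    # value after it, e.g. " 0" or " 1024 kB"
            if [[ $var == "$get" ]]; then
                echo "${val//[!0-9]/}"        # number only, mirroring the bare 'echo 0' in the trace
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    surp=$(get_meminfo_sketch HugePages_Surp)            # 0 in this run
    node0_total=$(get_meminfo_sketch HugePages_Total 0)  # per-node variant

The real helper instead slurps the whole file into an array with mapfile and walks it with IFS=': ' read, which is exactly the repeated IFS/read/continue pattern that fills the log above.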
00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- 
setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.689 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.689 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.690 22:02:30 -- setup/common.sh@33 -- # echo 0 00:03:35.690 22:02:30 -- setup/common.sh@33 -- # return 0 00:03:35.690 22:02:30 -- setup/hugepages.sh@100 -- # resv=0 00:03:35.690 22:02:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:35.690 nr_hugepages=1024 00:03:35.690 22:02:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.690 resv_hugepages=0 00:03:35.690 22:02:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.690 surplus_hugepages=0 00:03:35.690 22:02:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.690 anon_hugepages=0 00:03:35.690 22:02:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.690 22:02:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:35.690 22:02:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.690 22:02:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.690 22:02:30 -- setup/common.sh@18 -- # local node= 00:03:35.690 22:02:30 -- setup/common.sh@19 -- # local var val 00:03:35.690 22:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.690 22:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.690 22:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.690 22:02:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.690 22:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.690 22:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169286440 kB' 'MemAvailable: 172525180 kB' 'Buffers: 3896 kB' 'Cached: 15923144 kB' 'SwapCached: 0 kB' 'Active: 12787144 kB' 'Inactive: 3694312 kB' 'Active(anon): 12369188 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557896 kB' 'Mapped: 192544 kB' 'Shmem: 11814772 kB' 'KReclaimable: 541376 kB' 'Slab: 
1200768 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659392 kB' 'KernelStack: 20800 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13900792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317208 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.690 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.690 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 
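The scan running here is fetching HugePages_Total so the script can check the bookkeeping it echoed a little earlier (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0): the kernel's pool must account for every requested page. The check amounts to something like the following self-contained snippet (the values in the comments are the ones from this run, not guarantees):

    nr_hugepages=1024                                              # requested by the even_2G_alloc test
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)      # 0 here
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)      # 0 here
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 1024 here
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting OK ($total == $nr_hugepages + $surp + $resv)"
    else
        echo "hugepage accounting mismatch" >&2
    fi

That is the (( 1024 == nr_hugepages + surp + resv )) test visible in the hugepages.sh trace.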
22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.691 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.691 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 
00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.692 22:02:30 -- setup/common.sh@33 -- # echo 1024 00:03:35.692 22:02:30 -- setup/common.sh@33 -- # return 0 00:03:35.692 22:02:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.692 22:02:30 -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.692 22:02:30 -- setup/hugepages.sh@27 -- # local node 00:03:35.692 22:02:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.692 22:02:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:35.692 22:02:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.692 22:02:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:35.692 22:02:30 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:35.692 22:02:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.692 22:02:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.692 22:02:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.692 22:02:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.692 22:02:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.692 22:02:30 -- setup/common.sh@18 -- # local node=0 00:03:35.692 22:02:30 -- setup/common.sh@19 -- # local var val 00:03:35.692 22:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.692 22:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.692 22:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.692 22:02:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.692 22:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.692 22:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91996736 kB' 'MemUsed: 5618892 kB' 'SwapCached: 0 kB' 'Active: 2972904 kB' 'Inactive: 218172 kB' 'Active(anon): 2811080 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 218172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3058028 kB' 'Mapped: 101512 kB' 'AnonPages: 136288 kB' 'Shmem: 2678032 kB' 'KernelStack: 11336 kB' 'PageTables: 3384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349956 kB' 'Slab: 649412 kB' 'SReclaimable: 349956 kB' 'SUnreclaim: 299456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 
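With the system-wide total confirmed (the echo 1024 / return 0 above), the script moves on to the per-node half of the check: get_nodes enumerates /sys/devices/system/node/node[0-9]*, records 512 expected pages on each of the 2 nodes, and then reads each node's own meminfo, starting with /sys/devices/system/node/node0/meminfo. A condensed sketch of that per-node verification, assuming the same even 512/512 split seen in this run:

    expected_per_node=512                      # 1024 pages spread across the 2 NUMA nodes here
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
        echo "node${node}=${total} expecting ${expected_per_node}"   # matches the node0=512 / node1=512 lines below
    done

The real script also folds each node's HugePages_Surp and the reserved count into the comparison, which is why the trace keeps re-running get_meminfo HugePages_Surp per node even though both lookups come back 0.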
00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.692 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.692 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@33 -- # echo 0 00:03:35.693 22:02:30 -- setup/common.sh@33 -- # return 0 00:03:35.693 22:02:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.693 22:02:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.693 22:02:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.693 22:02:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:35.693 22:02:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.693 22:02:30 -- setup/common.sh@18 -- # local node=1 00:03:35.693 22:02:30 -- setup/common.sh@19 -- # local var val 00:03:35.693 22:02:30 -- setup/common.sh@20 -- # local mem_f mem 00:03:35.693 22:02:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.693 22:02:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:35.693 22:02:30 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:03:35.693 22:02:30 -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.693 22:02:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77297240 kB' 'MemUsed: 16468268 kB' 'SwapCached: 0 kB' 'Active: 9814044 kB' 'Inactive: 3476140 kB' 'Active(anon): 9557912 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3476140 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12869032 kB' 'Mapped: 91032 kB' 'AnonPages: 421356 kB' 'Shmem: 9136760 kB' 'KernelStack: 9288 kB' 'PageTables: 5504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 191420 kB' 'Slab: 551228 kB' 'SReclaimable: 191420 kB' 'SUnreclaim: 359808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.693 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.693 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- 
setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # continue 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # IFS=': ' 00:03:35.694 22:02:30 -- setup/common.sh@31 -- # read -r var val _ 00:03:35.694 22:02:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.694 22:02:30 -- setup/common.sh@33 -- # echo 0 00:03:35.694 22:02:30 -- setup/common.sh@33 -- # return 0 00:03:35.694 22:02:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.694 22:02:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.694 22:02:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.694 22:02:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.695 22:02:30 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:35.695 node0=512 expecting 512 00:03:35.695 22:02:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.695 22:02:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.695 22:02:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.695 22:02:30 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:35.695 node1=512 expecting 512 00:03:35.695 22:02:30 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:35.695 00:03:35.695 real 0m2.844s 00:03:35.695 user 0m1.161s 00:03:35.695 sys 0m1.752s 00:03:35.695 22:02:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.695 22:02:30 -- common/autotest_common.sh@10 -- # set +x 00:03:35.695 ************************************ 00:03:35.695 END TEST even_2G_alloc 00:03:35.695 ************************************ 00:03:35.695 22:02:30 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:35.695 22:02:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:35.695 22:02:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:35.695 22:02:30 -- common/autotest_common.sh@10 -- # set +x 00:03:35.695 ************************************ 00:03:35.695 START TEST odd_alloc 00:03:35.695 ************************************ 00:03:35.695 22:02:30 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:35.695 22:02:30 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:35.695 22:02:30 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:35.695 22:02:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:35.695 22:02:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.695 22:02:30 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:35.695 22:02:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:35.695 22:02:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:35.695 22:02:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.695 22:02:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:35.695 22:02:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:35.695 22:02:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.695 22:02:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.695 22:02:30 
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:35.695 22:02:30 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:35.695 22:02:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.695 22:02:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:35.695 22:02:30 -- setup/hugepages.sh@83 -- # : 513 00:03:35.695 22:02:30 -- setup/hugepages.sh@84 -- # : 1 00:03:35.695 22:02:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.695 22:02:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:35.695 22:02:30 -- setup/hugepages.sh@83 -- # : 0 00:03:35.695 22:02:30 -- setup/hugepages.sh@84 -- # : 0 00:03:35.695 22:02:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:35.695 22:02:30 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:35.695 22:02:30 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:35.695 22:02:30 -- setup/hugepages.sh@160 -- # setup output 00:03:35.695 22:02:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.695 22:02:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:38.236 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.236 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:38.236 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:38.501 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:38.501 22:02:33 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:38.501 22:02:33 -- setup/hugepages.sh@89 -- # local node 00:03:38.501 22:02:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:38.501 22:02:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:38.501 22:02:33 -- setup/hugepages.sh@92 -- # local surp 00:03:38.501 22:02:33 -- setup/hugepages.sh@93 -- # local resv 00:03:38.501 22:02:33 -- setup/hugepages.sh@94 -- # local anon 00:03:38.501 22:02:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:38.501 22:02:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:38.501 22:02:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:38.501 22:02:33 -- setup/common.sh@18 -- # local node= 00:03:38.501 22:02:33 -- setup/common.sh@19 -- # local var val 00:03:38.501 22:02:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.501 22:02:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.501 22:02:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.501 22:02:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.501 22:02:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.501 
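The odd_alloc run traced above first sizes the pool (the 2098176 kB request at the default 2048 kB page size rounds up to 1025 pages, i.e. the 2099200 kB Hugetlb figure seen in the meminfo dumps, exported as HUGEMEM=2049 with HUGE_EVEN_ALLOC=yes) and then spreads that odd total across the two NUMA nodes: node1 is assigned 512 pages and node0 absorbs the remainder with 513. A minimal sketch of that distribution pass, under an illustrative function name rather than the script's own:

split_hugepages_per_node() {
    # Give total/remaining-nodes pages to the highest-numbered node still
    # unassigned; the running remainder rolls forward, so the last node
    # processed (node0) picks up the odd page: 1025 over 2 nodes -> 513 + 512.
    local total=$1 nodes=$2
    local -a per_node
    while (( nodes > 0 )); do
        per_node[nodes - 1]=$(( total / nodes ))
        total=$(( total - per_node[nodes - 1] ))
        nodes=$(( nodes - 1 ))
    done
    local i
    for i in "${!per_node[@]}"; do
        echo "node${i}=${per_node[i]}"
    done
}

split_hugepages_per_node 1025 2   # prints node0=513 and node1=512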
22:02:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169292760 kB' 'MemAvailable: 172531500 kB' 'Buffers: 3896 kB' 'Cached: 15923228 kB' 'SwapCached: 0 kB' 'Active: 12788408 kB' 'Inactive: 3694312 kB' 'Active(anon): 12370452 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558516 kB' 'Mapped: 192704 kB' 'Shmem: 11814856 kB' 'KReclaimable: 541376 kB' 'Slab: 1199944 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 658568 kB' 'KernelStack: 21136 kB' 'PageTables: 10252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 13900780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317256 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.501 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.501 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.502 22:02:33 -- setup/common.sh@33 -- # echo 0 00:03:38.502 22:02:33 -- setup/common.sh@33 -- # return 0 00:03:38.502 22:02:33 -- setup/hugepages.sh@97 -- # anon=0 00:03:38.502 22:02:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:38.502 22:02:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.502 22:02:33 -- setup/common.sh@18 -- # local node= 00:03:38.502 22:02:33 -- setup/common.sh@19 -- # local var val 00:03:38.502 22:02:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.502 22:02:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.502 22:02:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.502 22:02:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.502 22:02:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.502 22:02:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169301524 kB' 'MemAvailable: 172540264 kB' 'Buffers: 3896 kB' 'Cached: 15923236 kB' 'SwapCached: 0 kB' 'Active: 12788528 kB' 'Inactive: 3694312 kB' 'Active(anon): 12370572 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 
kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558048 kB' 'Mapped: 192684 kB' 'Shmem: 11814864 kB' 'KReclaimable: 541376 kB' 'Slab: 1199788 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 658412 kB' 'KernelStack: 21056 kB' 'PageTables: 9844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 13900928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.502 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.502 22:02:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 
22:02:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.503 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.503 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.504 22:02:33 -- setup/common.sh@33 -- # echo 0 00:03:38.504 22:02:33 -- setup/common.sh@33 -- # return 0 00:03:38.504 22:02:33 -- setup/hugepages.sh@99 -- # surp=0 00:03:38.504 22:02:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.504 22:02:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.504 22:02:33 -- setup/common.sh@18 -- # local node= 00:03:38.504 22:02:33 -- setup/common.sh@19 -- # local var val 00:03:38.504 22:02:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.504 22:02:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.504 22:02:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.504 22:02:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.504 22:02:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.504 22:02:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169300096 kB' 'MemAvailable: 172538836 kB' 'Buffers: 3896 kB' 'Cached: 15923252 kB' 'SwapCached: 0 kB' 'Active: 12787820 kB' 'Inactive: 3694312 kB' 'Active(anon): 12369864 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558868 kB' 'Mapped: 192608 kB' 'Shmem: 11814880 kB' 'KReclaimable: 541376 kB' 'Slab: 1199724 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 658348 kB' 'KernelStack: 20928 kB' 'PageTables: 10140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 13901316 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317304 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:38.504 22:02:33 
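Every get_meminfo call traced here (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd) runs the same loop: slurp /proc/meminfo, or a node's own meminfo file when a node argument is given, strip the "Node N " prefix those per-node files carry, then scan key/value pairs with IFS=': ' until the requested key matches; every non-matching key is one of the "continue" steps filling these lines. A condensed sketch of that lookup, with an illustrative name rather than the script's exact code:

shopt -s extglob   # the +([0-9]) pattern below is an extended glob

get_meminfo_value() {
    local key=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_value HugePages_Surp      # system-wide: 0 in this run
get_meminfo_value HugePages_Free 1    # node1 only: reported 512 in the even_2G_alloc dump above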
-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- 
setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.504 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.504 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 
22:02:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.505 22:02:33 -- setup/common.sh@33 -- # echo 0 00:03:38.505 
22:02:33 -- setup/common.sh@33 -- # return 0 00:03:38.505 22:02:33 -- setup/hugepages.sh@100 -- # resv=0 00:03:38.505 22:02:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:38.505 nr_hugepages=1025 00:03:38.505 22:02:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.505 resv_hugepages=0 00:03:38.505 22:02:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.505 surplus_hugepages=0 00:03:38.505 22:02:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.505 anon_hugepages=0 00:03:38.505 22:02:33 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:38.505 22:02:33 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:38.505 22:02:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.505 22:02:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.505 22:02:33 -- setup/common.sh@18 -- # local node= 00:03:38.505 22:02:33 -- setup/common.sh@19 -- # local var val 00:03:38.505 22:02:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.505 22:02:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.505 22:02:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.505 22:02:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.505 22:02:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.505 22:02:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169297868 kB' 'MemAvailable: 172536608 kB' 'Buffers: 3896 kB' 'Cached: 15923268 kB' 'SwapCached: 0 kB' 'Active: 12787600 kB' 'Inactive: 3694312 kB' 'Active(anon): 12369644 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558060 kB' 'Mapped: 191976 kB' 'Shmem: 11814896 kB' 'KReclaimable: 541376 kB' 'Slab: 1199596 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 658220 kB' 'KernelStack: 20800 kB' 'PageTables: 9712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 13902012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.505 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.505 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 
00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.506 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.506 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.507 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.507 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.507 22:02:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.507 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.507 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.507 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.507 22:02:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.507 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.507 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.507 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.507 
22:02:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.507 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.507 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.507 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.507 22:02:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.507 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.507 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.507 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.507 22:02:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.507 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.769 22:02:33 -- setup/common.sh@33 -- # echo 1025 00:03:38.769 22:02:33 -- setup/common.sh@33 -- # return 0 00:03:38.769 22:02:33 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:38.769 22:02:33 -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.769 22:02:33 -- setup/hugepages.sh@27 -- # local node 00:03:38.769 22:02:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.769 22:02:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:38.769 22:02:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.769 22:02:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:38.769 22:02:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.769 22:02:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.769 22:02:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.769 22:02:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.769 22:02:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.769 22:02:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.769 22:02:33 
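For reference, the loop traced above is setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the requested one (HugePages_Total, which comes back as 1025 here); the per-node lookups that follow read /sys/devices/system/node/nodeN/meminfo instead and strip the "Node N " prefix first. A minimal standalone sketch of that lookup, using sed/awk parsing as an illustrative simplification rather than the script's own read loop:

    get_meminfo_sketch() {
        # Read one field from /proc/meminfo, or from a NUMA node's meminfo file
        # when a node number is supplied (mirrors the trace above; the sed/awk
        # parsing here is an assumption for brevity, not the script's code).
        local key=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix each line with "Node <n> "; drop that, then print
        # the value for the requested key (kB for sizes, a bare count for HugePages_*).
        sed "s/^Node $node //" "$mem_f" | awk -v k="$key" -F': *' '$1 == k {print $2 + 0}'
    }

    # Examples from this run:
    #   get_meminfo_sketch HugePages_Total     -> 1025
    #   get_meminfo_sketch HugePages_Surp 0    -> 0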
-- setup/common.sh@18 -- # local node=0 00:03:38.769 22:02:33 -- setup/common.sh@19 -- # local var val 00:03:38.769 22:02:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.769 22:02:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.769 22:02:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.769 22:02:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.769 22:02:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.769 22:02:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92007160 kB' 'MemUsed: 5608468 kB' 'SwapCached: 0 kB' 'Active: 2973452 kB' 'Inactive: 218172 kB' 'Active(anon): 2811628 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 218172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3058068 kB' 'Mapped: 101140 kB' 'AnonPages: 136676 kB' 'Shmem: 2678072 kB' 'KernelStack: 11672 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349956 kB' 'Slab: 648800 kB' 'SReclaimable: 349956 kB' 'SUnreclaim: 298844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.769 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.769 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@33 -- # echo 0 00:03:38.770 22:02:33 -- setup/common.sh@33 -- # return 0 00:03:38.770 22:02:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.770 22:02:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.770 22:02:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.770 22:02:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:38.770 22:02:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.770 22:02:33 -- setup/common.sh@18 -- # local node=1 00:03:38.770 22:02:33 -- setup/common.sh@19 -- # local var val 00:03:38.770 22:02:33 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.770 22:02:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.770 22:02:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:38.770 22:02:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:38.770 22:02:33 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.770 22:02:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.770 22:02:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77280636 kB' 'MemUsed: 16484872 kB' 'SwapCached: 0 kB' 'Active: 9820224 kB' 'Inactive: 3476140 kB' 'Active(anon): 9564092 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3476140 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12869124 kB' 'Mapped: 91532 kB' 'AnonPages: 427500 kB' 'Shmem: 9136852 kB' 'KernelStack: 9288 kB' 'PageTables: 5576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 191420 kB' 'Slab: 551300 kB' 'SReclaimable: 191420 kB' 'SUnreclaim: 359880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- 
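Between the two node scans, hugepages.sh is accumulating a per-node total: for each node it adds the reserved pages and then that node's HugePages_Surp into nodes_test[], which is what gets compared against the expected 512/513 split further down. A self-contained sketch of that accounting (variable names are illustrative, not the script's):

    nodes_test=()                            # indexed by NUMA node number
    resv=0                                   # resv_hugepages=0 in this run
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        (( nodes_test[node] += resv ))
        surp=$(sed "s/^Node $node //" "$node_dir/meminfo" \
               | awk -F': *' '$1 == "HugePages_Surp" {print $2 + 0}')
        (( nodes_test[node] += surp ))       # surplus is 0 on both nodes here
    done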
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.770 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.770 22:02:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- 
setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # continue 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.771 22:02:33 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.771 22:02:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.771 22:02:33 -- setup/common.sh@33 -- # echo 0 00:03:38.771 22:02:33 -- setup/common.sh@33 -- # return 0 00:03:38.771 22:02:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.771 22:02:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.771 22:02:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.771 22:02:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.771 22:02:33 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:38.771 node0=512 expecting 513 00:03:38.771 22:02:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.771 22:02:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
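The point of odd_alloc is that 1025 pages cannot be split evenly across two nodes, so one node ends up with 512 pages and the other with 513; the sorted_t/sorted_s bookkeeping above suggests the comparison is over the sorted per-node counts, which is why "node0=512 expecting 513" (and, just below, "node1=513 expecting 512") still passes. The arithmetic being verified, as a worked check:

    # Worked check of the odd split (values from this run):
    nr_hugepages=1025
    node0=512
    node1=513
    (( node0 + node1 == nr_hugepages )) && echo "512 + 513 = $nr_hugepages, split accepted"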
00:03:38.771 22:02:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.771 22:02:33 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:38.771 node1=513 expecting 512 00:03:38.771 22:02:33 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:38.771 00:03:38.771 real 0m2.905s 00:03:38.771 user 0m1.217s 00:03:38.771 sys 0m1.761s 00:03:38.771 22:02:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.771 22:02:33 -- common/autotest_common.sh@10 -- # set +x 00:03:38.771 ************************************ 00:03:38.771 END TEST odd_alloc 00:03:38.771 ************************************ 00:03:38.771 22:02:33 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:38.771 22:02:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:38.771 22:02:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:38.771 22:02:33 -- common/autotest_common.sh@10 -- # set +x 00:03:38.771 ************************************ 00:03:38.771 START TEST custom_alloc 00:03:38.771 ************************************ 00:03:38.771 22:02:33 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:38.771 22:02:33 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:38.771 22:02:33 -- setup/hugepages.sh@169 -- # local node 00:03:38.771 22:02:33 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:38.771 22:02:33 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:38.771 22:02:33 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:38.771 22:02:33 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:38.771 22:02:33 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:38.771 22:02:33 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:38.771 22:02:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:38.771 22:02:33 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:38.771 22:02:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:38.771 22:02:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:38.771 22:02:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.771 22:02:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:38.771 22:02:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.771 22:02:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.771 22:02:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.771 22:02:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:38.771 22:02:33 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:38.771 22:02:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.771 22:02:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:38.771 22:02:33 -- setup/hugepages.sh@83 -- # : 256 00:03:38.771 22:02:33 -- setup/hugepages.sh@84 -- # : 1 00:03:38.771 22:02:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.771 22:02:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:38.771 22:02:33 -- setup/hugepages.sh@83 -- # : 0 00:03:38.772 22:02:33 -- setup/hugepages.sh@84 -- # : 0 00:03:38.772 22:02:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.772 22:02:33 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:38.772 22:02:33 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:38.772 22:02:33 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:38.772 22:02:33 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:38.772 22:02:33 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:38.772 22:02:33 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:03:38.772 22:02:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:38.772 22:02:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:38.772 22:02:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:38.772 22:02:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.772 22:02:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:38.772 22:02:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.772 22:02:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.772 22:02:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.772 22:02:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:38.772 22:02:33 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:38.772 22:02:33 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:38.772 22:02:33 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:38.772 22:02:33 -- setup/hugepages.sh@78 -- # return 0 00:03:38.772 22:02:33 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:38.772 22:02:33 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:38.772 22:02:33 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:38.772 22:02:33 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:38.772 22:02:33 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:38.772 22:02:33 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:38.772 22:02:33 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:38.772 22:02:33 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:38.772 22:02:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:38.772 22:02:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.772 22:02:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:38.772 22:02:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.772 22:02:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.772 22:02:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.772 22:02:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:38.772 22:02:33 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:38.772 22:02:33 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:38.772 22:02:33 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:38.772 22:02:33 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:38.772 22:02:33 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:38.772 22:02:33 -- setup/hugepages.sh@78 -- # return 0 00:03:38.772 22:02:33 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:38.772 22:02:33 -- setup/hugepages.sh@187 -- # setup output 00:03:38.772 22:02:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.772 22:02:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.311 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.311 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:41.311 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:41.311 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:41.311 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.311 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.311 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.311 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.311 0000:00:04.0 
(8086 2021): Already using the vfio-pci driver 00:03:41.311 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:41.311 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:41.311 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:41.311 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:41.311 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:41.311 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:41.574 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:41.574 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:41.574 22:02:36 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:41.574 22:02:36 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:41.574 22:02:36 -- setup/hugepages.sh@89 -- # local node 00:03:41.574 22:02:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.574 22:02:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.574 22:02:36 -- setup/hugepages.sh@92 -- # local surp 00:03:41.574 22:02:36 -- setup/hugepages.sh@93 -- # local resv 00:03:41.574 22:02:36 -- setup/hugepages.sh@94 -- # local anon 00:03:41.574 22:02:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.574 22:02:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.574 22:02:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.574 22:02:36 -- setup/common.sh@18 -- # local node= 00:03:41.574 22:02:36 -- setup/common.sh@19 -- # local var val 00:03:41.574 22:02:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.574 22:02:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.574 22:02:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.574 22:02:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.574 22:02:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.574 22:02:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 168252500 kB' 'MemAvailable: 171491240 kB' 'Buffers: 3896 kB' 'Cached: 15923352 kB' 'SwapCached: 0 kB' 'Active: 12787344 kB' 'Inactive: 3694312 kB' 'Active(anon): 12369388 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557636 kB' 'Mapped: 192588 kB' 'Shmem: 11814980 kB' 'KReclaimable: 541376 kB' 'Slab: 1200316 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 658940 kB' 'KernelStack: 20576 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 13897360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 
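With the PCI/vfio listing out of the way, custom_alloc has requested 1 GB of 2048 kB hugepages on node 0 and 2 GB on node 1, i.e. 1048576/2048 = 512 pages and 2097152/2048 = 1024 pages, 1536 in total, which matches nr_hugepages=1536 above and the HugePages_Total: 1536 in the meminfo dump that follows. A sketch of how that HUGENODE request string is assembled (illustrative, not the script's exact code):

    default_hugepages=2048                           # kB, from Hugepagesize above
    nodes_hp=()
    nodes_hp[0]=$(( 1048576 / default_hugepages ))   # 512 pages  (1 GB on node 0)
    nodes_hp[1]=$(( 2097152 / default_hugepages ))   # 1024 pages (2 GB on node 1)

    HUGENODE=()
    total=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( total += nodes_hp[node] ))
    done
    ( IFS=,; echo "HUGENODE=${HUGENODE[*]}   # $total pages in total" )
    # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024   # 1536 pages in total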
22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- 
setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.574 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.574 22:02:36 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.575 22:02:36 -- setup/common.sh@33 -- # echo 0 00:03:41.575 22:02:36 -- setup/common.sh@33 -- # return 0 00:03:41.575 22:02:36 -- setup/hugepages.sh@97 -- # anon=0 00:03:41.575 22:02:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:41.575 22:02:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.575 22:02:36 -- setup/common.sh@18 -- # local node= 00:03:41.575 22:02:36 -- setup/common.sh@19 -- # local var val 00:03:41.575 22:02:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.575 22:02:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.575 22:02:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.575 22:02:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.575 22:02:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.575 22:02:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 168254708 kB' 'MemAvailable: 171493448 kB' 'Buffers: 3896 kB' 'Cached: 15923356 kB' 'SwapCached: 0 kB' 'Active: 12787024 kB' 'Inactive: 3694312 kB' 'Active(anon): 12369068 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557344 kB' 'Mapped: 192560 kB' 'Shmem: 11814984 kB' 'KReclaimable: 541376 kB' 'Slab: 1200396 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659020 kB' 'KernelStack: 20576 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 13897372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 
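Editor's note on the trace above: setup/common.sh's get_meminfo walks every "<key>: <value>" record of /proc/meminfo (or a per-node meminfo file when a node is given), continues past non-matching keys, and echoes the value once the requested key matches; here the AnonHugePages lookup returned 0, which hugepages.sh stored as anon=0 before starting the HugePages_Surp lookup. The following is a minimal, hypothetical condensation of that loop for readers following the log, not the verbatim SPDK helper; the function name get_meminfo_sketch and the sed-based "Node N " prefix stripping are assumptions, while the file paths and the ': ' field separator come from the trace itself.

  # Hypothetical sketch of the meminfo lookup traced above (bash).
  get_meminfo_sketch() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      # Per-node files live under /sys/devices/system/node/nodeN/meminfo and
      # prefix every record with "Node <N> "; the global file has no prefix.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"    # value column only, e.g. 1536 for HugePages_Total here
              return 0
          fi
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }

  # Example on this runner's snapshot: get_meminfo_sketch HugePages_Surp -> 0,
  # which hugepages.sh records as surp=0 in the steps that follow.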
00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.575 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.575 22:02:36 -- setup/common.sh@32 -- # [[ SwapFree 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # 
continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.576 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.576 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.576 22:02:36 -- setup/common.sh@33 -- # echo 0 00:03:41.576 22:02:36 -- setup/common.sh@33 -- # return 0 00:03:41.576 22:02:36 -- setup/hugepages.sh@99 -- # surp=0 00:03:41.576 22:02:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.576 22:02:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.576 22:02:36 -- setup/common.sh@18 -- # local node= 00:03:41.577 22:02:36 -- setup/common.sh@19 -- # local var val 00:03:41.577 22:02:36 -- 
setup/common.sh@20 -- # local mem_f mem 00:03:41.577 22:02:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.577 22:02:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.577 22:02:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.577 22:02:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.577 22:02:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 168254820 kB' 'MemAvailable: 171493560 kB' 'Buffers: 3896 kB' 'Cached: 15923368 kB' 'SwapCached: 0 kB' 'Active: 12787040 kB' 'Inactive: 3694312 kB' 'Active(anon): 12369084 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557344 kB' 'Mapped: 192560 kB' 'Shmem: 11814996 kB' 'KReclaimable: 541376 kB' 'Slab: 1200396 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659020 kB' 'KernelStack: 20576 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 13897388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 
00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.577 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.577 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.578 22:02:36 -- setup/common.sh@33 -- # echo 0 00:03:41.578 22:02:36 -- setup/common.sh@33 -- # return 0 00:03:41.578 22:02:36 -- setup/hugepages.sh@100 -- # resv=0 00:03:41.578 22:02:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:41.578 nr_hugepages=1536 00:03:41.578 22:02:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:41.578 resv_hugepages=0 00:03:41.578 22:02:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:41.578 surplus_hugepages=0 00:03:41.578 22:02:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:41.578 anon_hugepages=0 00:03:41.578 22:02:36 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:41.578 22:02:36 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:41.578 22:02:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:41.578 22:02:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:41.578 22:02:36 -- setup/common.sh@18 -- # local node= 00:03:41.578 22:02:36 -- setup/common.sh@19 -- # local var val 00:03:41.578 22:02:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.578 22:02:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.578 22:02:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.578 22:02:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.578 22:02:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.578 22:02:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
191381136 kB' 'MemFree: 168254820 kB' 'MemAvailable: 171493560 kB' 'Buffers: 3896 kB' 'Cached: 15923392 kB' 'SwapCached: 0 kB' 'Active: 12786708 kB' 'Inactive: 3694312 kB' 'Active(anon): 12368752 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556968 kB' 'Mapped: 192560 kB' 'Shmem: 11815020 kB' 'KReclaimable: 541376 kB' 'Slab: 1200396 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659020 kB' 'KernelStack: 20560 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 13897400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.578 22:02:36 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.578 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.578 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.579 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.579 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:41.580 22:02:36 -- setup/common.sh@33 -- # echo 1536 00:03:41.580 22:02:36 -- setup/common.sh@33 -- # return 0 00:03:41.580 22:02:36 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:41.580 22:02:36 -- setup/hugepages.sh@112 -- # get_nodes 00:03:41.580 22:02:36 -- setup/hugepages.sh@27 -- # local node 00:03:41.580 22:02:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.580 22:02:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:41.580 22:02:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:41.580 22:02:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:41.580 22:02:36 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:41.580 22:02:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:41.580 22:02:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.580 22:02:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.580 22:02:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:41.580 22:02:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.580 22:02:36 -- setup/common.sh@18 -- # local node=0 00:03:41.580 22:02:36 -- setup/common.sh@19 -- # local var val 00:03:41.580 22:02:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.580 22:02:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.580 22:02:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:41.580 22:02:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:41.580 22:02:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.580 22:02:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92002824 kB' 'MemUsed: 5612804 kB' 'SwapCached: 0 kB' 'Active: 2973716 kB' 'Inactive: 218172 kB' 'Active(anon): 2811892 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 218172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3058116 kB' 'Mapped: 101528 kB' 'AnonPages: 137028 kB' 'Shmem: 2678120 kB' 'KernelStack: 11384 kB' 'PageTables: 3536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349956 kB' 'Slab: 648996 kB' 'SReclaimable: 349956 kB' 'SUnreclaim: 299040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:41.580 22:02:36 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.580 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.580 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- 
setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.581 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.581 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@33 -- # echo 0 00:03:41.582 22:02:36 -- setup/common.sh@33 -- # return 0 00:03:41.582 22:02:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.582 22:02:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:41.582 22:02:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:41.582 22:02:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 
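(Editorial note, not part of the captured log: the trace above and below is setup/common.sh expanding its get_meminfo helper once per NUMA node — it reads /proc/meminfo or the node-specific meminfo file, strips the "Node N " prefix, and walks every "key: value" pair until it reaches the requested field, here HugePages_Surp. The following is a minimal sketch of that lookup assuming the same file layout; the sed/awk condensation is this sketch's own shortcut, whereas the real script uses the read/continue loop visible in the trace.)

# sketch of the per-node meminfo lookup that the trace performs
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # per-node queries use the node-specific meminfo file when it exists
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # drop the leading "Node N " prefix (present only in the per-node file)
    # and print the value of the requested key, e.g. HugePages_Surp -> 0
    sed 's/^Node [0-9]* //' "$mem_f" | awk -v k="$get:" '$1 == k {print $2; exit}'
}

# example matching the call traced below: surplus hugepages on node 1
get_meminfo HugePages_Surp 1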
00:03:41.582 22:02:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.582 22:02:36 -- setup/common.sh@18 -- # local node=1 00:03:41.582 22:02:36 -- setup/common.sh@19 -- # local var val 00:03:41.582 22:02:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.582 22:02:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.582 22:02:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:41.582 22:02:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:41.582 22:02:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.582 22:02:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.582 22:02:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 76251996 kB' 'MemUsed: 17513512 kB' 'SwapCached: 0 kB' 'Active: 9813340 kB' 'Inactive: 3476140 kB' 'Active(anon): 9557208 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3476140 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12869176 kB' 'Mapped: 91032 kB' 'AnonPages: 420312 kB' 'Shmem: 9136904 kB' 'KernelStack: 9192 kB' 'PageTables: 5220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 191420 kB' 'Slab: 551400 kB' 'SReclaimable: 191420 kB' 'SUnreclaim: 359980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 
00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.582 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.582 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.844 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.844 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- 
setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 
00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # continue 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.845 22:02:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.845 22:02:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.845 22:02:36 -- setup/common.sh@33 -- # echo 0 00:03:41.845 22:02:36 -- setup/common.sh@33 -- # return 0 00:03:41.845 22:02:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:41.845 22:02:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.845 22:02:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.845 22:02:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.845 22:02:36 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:41.845 node0=512 expecting 512 00:03:41.845 22:02:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:41.845 22:02:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:41.845 22:02:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:41.845 22:02:36 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:41.845 node1=1024 expecting 1024 00:03:41.845 22:02:36 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:41.845 00:03:41.845 real 0m2.992s 00:03:41.845 user 0m1.245s 00:03:41.845 sys 0m1.820s 00:03:41.845 22:02:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.845 22:02:36 -- common/autotest_common.sh@10 -- # set +x 00:03:41.845 ************************************ 00:03:41.845 END TEST custom_alloc 00:03:41.845 ************************************ 00:03:41.845 22:02:36 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:41.845 22:02:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.845 22:02:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.845 22:02:36 -- common/autotest_common.sh@10 -- # set +x 00:03:41.845 ************************************ 00:03:41.845 START TEST no_shrink_alloc 00:03:41.845 ************************************ 00:03:41.845 22:02:36 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:41.845 22:02:36 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:41.845 22:02:36 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:41.845 22:02:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:41.845 22:02:36 -- setup/hugepages.sh@51 -- # shift 00:03:41.845 22:02:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:41.845 22:02:36 -- setup/hugepages.sh@52 
-- # local node_ids 00:03:41.845 22:02:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:41.845 22:02:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:41.845 22:02:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:41.845 22:02:36 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:41.845 22:02:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:41.845 22:02:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:41.845 22:02:36 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:41.845 22:02:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:41.845 22:02:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:41.845 22:02:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:41.845 22:02:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:41.845 22:02:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:41.845 22:02:36 -- setup/hugepages.sh@73 -- # return 0 00:03:41.845 22:02:36 -- setup/hugepages.sh@198 -- # setup output 00:03:41.845 22:02:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.845 22:02:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:44.390 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.390 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:44.390 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.390 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.390 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.390 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.390 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.390 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.390 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.390 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:44.390 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:44.390 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:44.390 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:44.653 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:44.653 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:44.653 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:44.653 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:44.653 22:02:39 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:44.653 22:02:39 -- setup/hugepages.sh@89 -- # local node 00:03:44.653 22:02:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:44.653 22:02:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:44.653 22:02:39 -- setup/hugepages.sh@92 -- # local surp 00:03:44.653 22:02:39 -- setup/hugepages.sh@93 -- # local resv 00:03:44.653 22:02:39 -- setup/hugepages.sh@94 -- # local anon 00:03:44.653 22:02:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:44.653 22:02:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:44.653 22:02:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:44.653 22:02:39 -- setup/common.sh@18 -- # local node= 00:03:44.653 22:02:39 -- setup/common.sh@19 -- # local var val 00:03:44.653 22:02:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.653 22:02:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.653 22:02:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.653 22:02:39 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.653 22:02:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.653 22:02:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.653 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.653 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169285384 kB' 'MemAvailable: 172524124 kB' 'Buffers: 3896 kB' 'Cached: 15923476 kB' 'SwapCached: 0 kB' 'Active: 12788416 kB' 'Inactive: 3694312 kB' 'Active(anon): 12370460 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558088 kB' 'Mapped: 192700 kB' 'Shmem: 11815104 kB' 'KReclaimable: 541376 kB' 'Slab: 1200948 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659572 kB' 'KernelStack: 20576 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13898008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317224 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 
22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.654 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.654 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:44.655 22:02:39 -- setup/common.sh@33 -- # echo 0 00:03:44.655 22:02:39 -- setup/common.sh@33 -- # return 0 00:03:44.655 22:02:39 -- setup/hugepages.sh@97 -- # anon=0 00:03:44.655 22:02:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:44.655 22:02:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.655 22:02:39 -- setup/common.sh@18 -- # local node= 00:03:44.655 22:02:39 -- setup/common.sh@19 -- # local var val 00:03:44.655 22:02:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.655 22:02:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.655 22:02:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.655 22:02:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.655 22:02:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.655 22:02:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169287388 kB' 'MemAvailable: 172526128 kB' 'Buffers: 3896 kB' 'Cached: 15923480 kB' 'SwapCached: 0 kB' 'Active: 12788176 kB' 'Inactive: 3694312 kB' 'Active(anon): 12370220 kB' 'Inactive(anon): 0 kB' 
'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557852 kB' 'Mapped: 192660 kB' 'Shmem: 11815108 kB' 'KReclaimable: 541376 kB' 'Slab: 1200948 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659572 kB' 'KernelStack: 20576 kB' 'PageTables: 8744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13898020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.655 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.655 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 
22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.656 22:02:39 -- setup/common.sh@33 -- # echo 0 00:03:44.656 22:02:39 -- setup/common.sh@33 -- # return 0 00:03:44.656 22:02:39 -- setup/hugepages.sh@99 -- # surp=0 00:03:44.656 22:02:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:44.656 22:02:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:44.656 22:02:39 -- setup/common.sh@18 -- # local node= 00:03:44.656 22:02:39 -- setup/common.sh@19 -- # local var val 00:03:44.656 22:02:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.656 22:02:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.656 22:02:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.656 22:02:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.656 22:02:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.656 22:02:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169287528 kB' 'MemAvailable: 172526268 kB' 'Buffers: 3896 kB' 'Cached: 15923492 kB' 'SwapCached: 0 kB' 'Active: 12787704 kB' 'Inactive: 3694312 kB' 'Active(anon): 12369748 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557832 kB' 'Mapped: 192576 kB' 'Shmem: 11815120 kB' 'KReclaimable: 541376 kB' 'Slab: 1200936 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659560 kB' 'KernelStack: 20576 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13898036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.656 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.656 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 
00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 
22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.657 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.657 22:02:39 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:44.657 22:02:39 -- setup/common.sh@33 -- # echo 0 00:03:44.657 22:02:39 -- setup/common.sh@33 -- # return 0 00:03:44.657 22:02:39 -- setup/hugepages.sh@100 -- # resv=0 00:03:44.657 22:02:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:44.657 nr_hugepages=1024 00:03:44.657 22:02:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:44.658 resv_hugepages=0 00:03:44.658 22:02:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:44.658 surplus_hugepages=0 00:03:44.658 22:02:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:44.658 anon_hugepages=0 00:03:44.658 22:02:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.658 22:02:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:44.658 22:02:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:44.658 22:02:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:44.658 22:02:39 -- setup/common.sh@18 -- # local node= 00:03:44.658 22:02:39 -- setup/common.sh@19 -- # local var val 00:03:44.658 22:02:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.658 22:02:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.658 22:02:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:44.658 22:02:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:44.658 22:02:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.658 22:02:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169287848 kB' 'MemAvailable: 172526588 kB' 'Buffers: 3896 kB' 'Cached: 15923504 kB' 'SwapCached: 0 kB' 'Active: 12787700 kB' 'Inactive: 3694312 kB' 'Active(anon): 12369744 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557824 kB' 'Mapped: 192576 kB' 'Shmem: 11815132 kB' 'KReclaimable: 541376 kB' 'Slab: 1200936 kB' 'SReclaimable: 541376 kB' 'SUnreclaim: 659560 kB' 'KernelStack: 20576 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13898052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 
-- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- 
setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.658 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.658 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 
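The long run of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue" entries around this point is the per-key scan inside get_meminfo: the whole of /proc/meminfo is captured into an array, and the loop skips field after field until the requested key matches, then echoes its value (1024 for HugePages_Total a few entries further on). A condensed sketch of that pattern, reconstructed from the trace rather than copied from setup/common.sh, so details may differ:

shopt -s extglob

# Reconstructed sketch of the scan visible in this trace (not the verbatim SPDK helper).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument the per-node file is read instead (see the node=0 pass below).
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")       # per-node lines start with "Node 0 "; strip that prefix
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long run of "continue" entries in the log
        echo "$val"                        # e.g. 1024 for HugePages_Total, 0 for HugePages_Rsvd
        return 0
    done
    return 1
}

Called as get_meminfo_sketch HugePages_Total, or get_meminfo_sketch HugePages_Surp 0 for node 0, it would return the same figures the script checks against nr_hugepages below.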
00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:44.659 22:02:39 -- setup/common.sh@33 -- # echo 1024 00:03:44.659 22:02:39 -- setup/common.sh@33 -- # return 0 00:03:44.659 22:02:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:44.659 22:02:39 -- setup/hugepages.sh@112 -- # get_nodes 00:03:44.659 22:02:39 -- setup/hugepages.sh@27 -- # local node 00:03:44.659 22:02:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.659 22:02:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:44.659 22:02:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:44.659 22:02:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:44.659 22:02:39 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:44.659 22:02:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:44.659 22:02:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:44.659 22:02:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:44.659 22:02:39 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:44.659 22:02:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:44.659 22:02:39 -- setup/common.sh@18 -- # local node=0 00:03:44.659 22:02:39 -- setup/common.sh@19 -- # local var val 00:03:44.659 22:02:39 -- setup/common.sh@20 -- # local mem_f mem 00:03:44.659 22:02:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:44.659 22:02:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:44.659 22:02:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:44.659 22:02:39 -- setup/common.sh@28 -- # mapfile -t mem 00:03:44.659 22:02:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:44.659 22:02:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90943964 kB' 'MemUsed: 6671664 kB' 'SwapCached: 0 kB' 'Active: 2973948 kB' 'Inactive: 218172 kB' 'Active(anon): 2812124 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 218172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3058216 kB' 'Mapped: 101540 kB' 'AnonPages: 137064 kB' 'Shmem: 2678220 kB' 'KernelStack: 11368 kB' 'PageTables: 3472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349956 kB' 'Slab: 649344 kB' 'SReclaimable: 349956 kB' 'SUnreclaim: 299388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.659 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.659 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 
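Having confirmed the system-wide figures (nr_hugepages=1024 with resv, surp and anon all 0, so (( 1024 == nr_hugepages + surp + resv )) passes), the scan that continues below repeats the same walk against /sys/devices/system/node/node0/meminfo to attribute the pool to NUMA nodes; with no_nodes=2 the expectation seeded earlier is 1024 pages on node0 and 0 on node1. The bookkeeping, restated with the values visible in this trace (illustrative shell, not the script itself):

# Per-node hugepage accounting, using the numbers shown in this log.
nr_hugepages=1024 resv=0 surp=0
(( 1024 == nr_hugepages + surp + resv )) && echo "system-wide hugepage count OK"
nodes_test=([0]=1024 [1]=0)              # seeded from the nodes_sys assignments earlier in the trace
for node in 0 1; do
    (( nodes_test[node] += resv ))       # resv is 0 here
    node_surp=0                          # HugePages_Surp read from node${node}/meminfo
    (( nodes_test[node] += node_surp ))
    echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
done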
00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 
22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # continue 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # IFS=': ' 00:03:44.660 22:02:39 -- setup/common.sh@31 -- # read -r var val _ 00:03:44.660 22:02:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:44.660 22:02:39 -- setup/common.sh@33 -- # echo 0 00:03:44.660 22:02:39 -- setup/common.sh@33 -- # return 0 00:03:44.660 22:02:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:44.660 22:02:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:44.660 22:02:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:44.660 22:02:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:44.660 22:02:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:44.660 node0=1024 expecting 1024 00:03:44.660 22:02:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:44.660 22:02:39 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:44.660 22:02:39 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:44.660 22:02:39 -- setup/hugepages.sh@202 -- # setup output 00:03:44.660 22:02:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.660 22:02:39 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:47.957 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:47.957 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:47.957 0000:80:04.0 (8086 2021): Already using the vfio-pci 
driver 00:03:47.957 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:47.957 22:02:42 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:47.957 22:02:42 -- setup/hugepages.sh@89 -- # local node 00:03:47.957 22:02:42 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.957 22:02:42 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.957 22:02:42 -- setup/hugepages.sh@92 -- # local surp 00:03:47.957 22:02:42 -- setup/hugepages.sh@93 -- # local resv 00:03:47.957 22:02:42 -- setup/hugepages.sh@94 -- # local anon 00:03:47.957 22:02:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.957 22:02:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.957 22:02:42 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.957 22:02:42 -- setup/common.sh@18 -- # local node= 00:03:47.957 22:02:42 -- setup/common.sh@19 -- # local var val 00:03:47.957 22:02:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.957 22:02:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.957 22:02:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.957 22:02:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.957 22:02:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.957 22:02:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169303632 kB' 'MemAvailable: 172542356 kB' 'Buffers: 3896 kB' 'Cached: 15923580 kB' 'SwapCached: 0 kB' 'Active: 12789212 kB' 'Inactive: 3694312 kB' 'Active(anon): 12371256 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559420 kB' 'Mapped: 192580 kB' 'Shmem: 11815208 kB' 'KReclaimable: 541344 kB' 'Slab: 1200824 kB' 'SReclaimable: 541344 kB' 'SUnreclaim: 659480 kB' 'KernelStack: 20592 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13898372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 
22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
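The "INFO: Requested 512 hugepages but 1024 already allocated on node0" line above is the result of re-running scripts/setup.sh with CLEAR_HUGE=no and NRHUGE=512: the existing 1024-page reservation is not torn down, the smaller request is simply left alone, and verify_nr_hugepages then repeats the AnonHugePages and HugePages_Surp scans against the unchanged pool. The guard below is only a plausible reading of that message, not the actual scripts/setup.sh logic, which is not shown in this excerpt:

# Hypothetical guard consistent with the INFO message (assumed names; the real setup.sh may differ).
requested=${NRHUGE:-512}
allocated=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if [[ ${CLEAR_HUGE:-no} == no ]] && (( allocated >= requested )); then
    echo "INFO: Requested $requested hugepages but $allocated already allocated on node0"
fi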
00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.957 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.957 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- 
# [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.958 22:02:42 -- setup/common.sh@33 -- # echo 0 00:03:47.958 22:02:42 -- setup/common.sh@33 -- # return 0 00:03:47.958 22:02:42 -- 
setup/hugepages.sh@97 -- # anon=0 00:03:47.958 22:02:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.958 22:02:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.958 22:02:42 -- setup/common.sh@18 -- # local node= 00:03:47.958 22:02:42 -- setup/common.sh@19 -- # local var val 00:03:47.958 22:02:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.958 22:02:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.958 22:02:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.958 22:02:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.958 22:02:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.958 22:02:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169304456 kB' 'MemAvailable: 172543180 kB' 'Buffers: 3896 kB' 'Cached: 15923584 kB' 'SwapCached: 0 kB' 'Active: 12789160 kB' 'Inactive: 3694312 kB' 'Active(anon): 12371204 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559376 kB' 'Mapped: 192580 kB' 'Shmem: 11815212 kB' 'KReclaimable: 541344 kB' 'Slab: 1200904 kB' 'SReclaimable: 541344 kB' 'SUnreclaim: 659560 kB' 'KernelStack: 20608 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13911284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 
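The meminfo snapshot just printed is internally consistent: 1024 huge pages at a Hugepagesize of 2048 kB account exactly for the Hugetlb figure of 2097152 kB (2 GiB), and HugePages_Free still equals HugePages_Total because nothing has mapped the pool yet. The arithmetic, for reference:

# Cross-check of the hugepage figures in the meminfo dump above.
echo $(( 1024 * 2048 ))                 # 2097152 kB, matching Hugetlb
echo $(( 1024 * 2048 / 1024 / 1024 ))   # 2 GiB of memory backed by the pool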
00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.958 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.958 22:02:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 
-- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.959 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.959 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.960 22:02:42 -- setup/common.sh@33 -- # echo 0 00:03:47.960 22:02:42 -- setup/common.sh@33 -- # return 0 00:03:47.960 22:02:42 -- setup/hugepages.sh@99 -- # surp=0 00:03:47.960 22:02:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.960 22:02:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.960 22:02:42 -- setup/common.sh@18 -- # local node= 00:03:47.960 22:02:42 -- setup/common.sh@19 -- # local var val 00:03:47.960 22:02:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.960 22:02:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.960 22:02:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.960 22:02:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.960 22:02:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.960 22:02:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
191381136 kB' 'MemFree: 169304532 kB' 'MemAvailable: 172543256 kB' 'Buffers: 3896 kB' 'Cached: 15923584 kB' 'SwapCached: 0 kB' 'Active: 12788880 kB' 'Inactive: 3694312 kB' 'Active(anon): 12370924 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559096 kB' 'Mapped: 192580 kB' 'Shmem: 11815212 kB' 'KReclaimable: 541344 kB' 'Slab: 1200896 kB' 'SReclaimable: 541344 kB' 'SUnreclaim: 659552 kB' 'KernelStack: 20544 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13898032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317080 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.960 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.960 22:02:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # 
continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.961 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.961 22:02:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.961 22:02:42 -- setup/common.sh@33 -- # echo 0 00:03:47.961 22:02:42 -- setup/common.sh@33 -- # return 0 00:03:47.961 22:02:42 -- setup/hugepages.sh@100 -- # resv=0 00:03:47.961 22:02:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:47.961 nr_hugepages=1024 00:03:47.961 22:02:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.961 resv_hugepages=0 00:03:47.961 22:02:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.961 surplus_hugepages=0 00:03:47.961 22:02:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.961 anon_hugepages=0 00:03:47.961 22:02:42 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.961 22:02:42 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:47.961 22:02:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.961 22:02:42 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.961 22:02:42 -- setup/common.sh@18 -- # local node= 00:03:47.961 22:02:42 -- setup/common.sh@19 -- # local var val 00:03:47.961 22:02:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.961 22:02:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.961 22:02:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.961 22:02:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.961 22:02:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.961 22:02:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169304908 kB' 'MemAvailable: 172543632 kB' 'Buffers: 3896 kB' 'Cached: 15923612 kB' 'SwapCached: 0 kB' 'Active: 12788592 kB' 'Inactive: 3694312 kB' 'Active(anon): 12370636 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558740 kB' 'Mapped: 192580 kB' 'Shmem: 11815240 kB' 'KReclaimable: 541344 kB' 'Slab: 1200896 kB' 'SReclaimable: 541344 kB' 'SUnreclaim: 659552 kB' 'KernelStack: 20528 kB' 'PageTables: 8544 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 13898184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317096 kB' 'VmallocChunk: 0 kB' 'Percpu: 111360 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3922900 kB' 'DirectMap2M: 33505280 kB' 'DirectMap1G: 164626432 kB' 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 
22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 
22:02:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.962 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.962 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 
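Once this scan returns HugePages_Total, hugepages.sh only has to verify a small piece of arithmetic: the pool it configured must equal the kernel-reported total once surplus, reserved, and anonymous pages are accounted for. The exact variable names are already expanded away in the trace, so the restatement below is illustrative, using the values read above (anon=0, surp=0, resv=0) and the 1024 reported a few lines further on:

    # Hedged restatement of the no_shrink_alloc accounting check.
    nr_hugepages=1024
    anon=0 surp=0 resv=0
    hp_total=1024          # HugePages_Total reported by the scan traced here
    (( hp_total == nr_hugepages + surp + resv )) && echo "no_shrink_alloc accounting holds"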
00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 
22:02:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.963 22:02:42 -- setup/common.sh@33 -- # echo 1024 00:03:47.963 22:02:42 -- setup/common.sh@33 -- # return 0 00:03:47.963 22:02:42 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:47.963 22:02:42 -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.963 22:02:42 -- setup/hugepages.sh@27 -- # local node 00:03:47.963 22:02:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.963 22:02:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.963 22:02:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.963 22:02:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:47.963 22:02:42 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.963 22:02:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.963 22:02:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.963 22:02:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.963 22:02:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.963 22:02:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.963 22:02:42 -- setup/common.sh@18 -- # local node=0 00:03:47.963 22:02:42 -- setup/common.sh@19 -- # local var val 00:03:47.963 22:02:42 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.963 22:02:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.963 22:02:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.963 22:02:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.963 22:02:42 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.963 22:02:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90936972 kB' 'MemUsed: 6678656 kB' 'SwapCached: 0 kB' 'Active: 2974232 kB' 'Inactive: 218172 kB' 'Active(anon): 2812408 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 218172 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3058280 kB' 'Mapped: 101544 kB' 'AnonPages: 137404 kB' 'Shmem: 2678284 kB' 'KernelStack: 11368 kB' 'PageTables: 3476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 349956 kB' 'Slab: 649224 kB' 'SReclaimable: 349956 kB' 'SUnreclaim: 299268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # 
continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.963 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.963 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # continue 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.964 22:02:42 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.964 22:02:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.964 22:02:42 -- setup/common.sh@33 -- # echo 0 00:03:47.964 22:02:42 -- setup/common.sh@33 -- # return 0 00:03:47.964 22:02:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.964 22:02:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.964 22:02:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.964 22:02:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.964 22:02:42 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.964 node0=1024 expecting 1024 00:03:47.964 22:02:42 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:47.964 00:03:47.964 real 0m5.838s 00:03:47.964 user 0m2.336s 00:03:47.964 sys 0m3.620s 00:03:47.964 22:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.964 22:02:42 -- common/autotest_common.sh@10 -- # set +x 00:03:47.964 ************************************ 00:03:47.964 END TEST no_shrink_alloc 00:03:47.964 ************************************ 00:03:47.964 22:02:42 -- setup/hugepages.sh@217 -- # clear_hp 00:03:47.964 22:02:42 -- setup/hugepages.sh@37 -- # local 
node hp 00:03:47.964 22:02:42 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:47.964 22:02:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.964 22:02:42 -- setup/hugepages.sh@41 -- # echo 0 00:03:47.964 22:02:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.964 22:02:42 -- setup/hugepages.sh@41 -- # echo 0 00:03:47.964 22:02:42 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:47.964 22:02:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.964 22:02:42 -- setup/hugepages.sh@41 -- # echo 0 00:03:47.964 22:02:42 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:47.964 22:02:42 -- setup/hugepages.sh@41 -- # echo 0 00:03:47.964 22:02:42 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:47.964 22:02:42 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:47.964 00:03:47.964 real 0m21.582s 00:03:47.964 user 0m8.459s 00:03:47.964 sys 0m12.769s 00:03:47.964 22:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.964 22:02:42 -- common/autotest_common.sh@10 -- # set +x 00:03:47.964 ************************************ 00:03:47.964 END TEST hugepages 00:03:47.964 ************************************ 00:03:47.964 22:02:42 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:47.964 22:02:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:47.964 22:02:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:47.964 22:02:42 -- common/autotest_common.sh@10 -- # set +x 00:03:47.964 ************************************ 00:03:47.964 START TEST driver 00:03:47.964 ************************************ 00:03:47.964 22:02:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:47.965 * Looking for test storage... 
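The driver suite that starts here boils down to one decision, visible in the trace that follows: with 174 IOMMU groups present and modprobe able to resolve vfio_pci, guess_driver settles on vfio-pci (the "No valid driver found" string it compares against is the failure signal). A condensed sketch of that decision, assuming the same sysfs paths; the function name and using modprobe's exit status alone (rather than inspecting the resolved .ko list) are simplifications, not the SPDK script itself:

    guess_driver_sketch() {
        # Prefer vfio-pci when the host has populated IOMMU groups (or unsafe
        # no-IOMMU mode enabled) and the vfio_pci module can be resolved.
        local unsafe_vfio=N groups
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        shopt -s nullglob
        groups=(/sys/kernel/iommu_groups/*)
        shopt -u nullglob
        if { (( ${#groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; } &&
            modprobe --show-depends vfio_pci &> /dev/null; then
            echo vfio-pci                  # what the trace below ends up echoing
        else
            echo 'No valid driver found'   # failure marker, as in the trace below
        fi
    }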
00:03:47.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:47.965 22:02:42 -- setup/driver.sh@68 -- # setup reset 00:03:47.965 22:02:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.965 22:02:42 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.162 22:02:46 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:52.162 22:02:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:52.162 22:02:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:52.162 22:02:46 -- common/autotest_common.sh@10 -- # set +x 00:03:52.162 ************************************ 00:03:52.162 START TEST guess_driver 00:03:52.162 ************************************ 00:03:52.162 22:02:46 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:52.163 22:02:46 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:52.163 22:02:46 -- setup/driver.sh@47 -- # local fail=0 00:03:52.163 22:02:46 -- setup/driver.sh@49 -- # pick_driver 00:03:52.163 22:02:46 -- setup/driver.sh@36 -- # vfio 00:03:52.163 22:02:46 -- setup/driver.sh@21 -- # local iommu_grups 00:03:52.163 22:02:46 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:52.163 22:02:46 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:52.163 22:02:46 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:52.163 22:02:46 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:52.163 22:02:46 -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:52.163 22:02:46 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:52.163 22:02:46 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:52.163 22:02:46 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:52.163 22:02:46 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:52.163 22:02:46 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:52.163 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:52.163 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:52.163 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:52.163 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:52.163 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:52.163 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:52.163 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:52.163 22:02:46 -- setup/driver.sh@30 -- # return 0 00:03:52.163 22:02:46 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:52.163 22:02:46 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:52.163 22:02:46 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:52.163 22:02:46 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:52.163 Looking for driver=vfio-pci 00:03:52.163 22:02:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:52.163 22:02:46 -- setup/driver.sh@45 -- # setup output config 00:03:52.163 22:02:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.163 22:02:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.702 22:02:49 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.702 22:02:49 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.702 22:02:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.641 22:02:50 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:55.641 22:02:50 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:55.641 22:02:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:55.641 22:02:50 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:55.641 22:02:50 -- setup/driver.sh@65 -- # setup reset 00:03:55.641 22:02:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.641 22:02:50 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.837 00:03:59.837 real 0m7.475s 00:03:59.837 user 0m2.105s 00:03:59.837 sys 0m3.756s 00:03:59.837 22:02:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.837 22:02:54 -- common/autotest_common.sh@10 -- # set +x 00:03:59.837 ************************************ 00:03:59.837 END TEST guess_driver 00:03:59.837 ************************************ 00:03:59.837 00:03:59.837 real 0m11.573s 00:03:59.837 user 0m3.273s 00:03:59.837 sys 0m5.983s 00:03:59.837 22:02:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.837 22:02:54 -- common/autotest_common.sh@10 -- # set +x 00:03:59.837 ************************************ 00:03:59.837 END TEST driver 00:03:59.837 ************************************ 00:03:59.837 22:02:54 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:59.837 22:02:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:59.837 22:02:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:59.837 22:02:54 -- common/autotest_common.sh@10 -- # set +x 00:03:59.837 ************************************ 00:03:59.837 START TEST devices 00:03:59.837 ************************************ 00:03:59.837 22:02:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:59.837 * Looking for test storage... 
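For reference, the guess_driver pass above settles on vfio-pci because IOMMU groups are present (174 of them) and modprobe --show-depends resolves vfio_pci to real kernel objects. A simplified sketch of that decision, distilled from the trace rather than copied verbatim from setup/driver.sh, using the 'No valid driver found' sentinel the test compares against:
#!/usr/bin/env bash
# Prefer vfio-pci when the IOMMU is usable and the module chain resolves.
shopt -s nullglob
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}
pick_driver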
00:03:59.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:59.837 22:02:54 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:59.837 22:02:54 -- setup/devices.sh@192 -- # setup reset 00:03:59.837 22:02:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.837 22:02:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.417 22:02:57 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:02.417 22:02:57 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:02.417 22:02:57 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:02.417 22:02:57 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:02.417 22:02:57 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:02.417 22:02:57 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:02.417 22:02:57 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:02.417 22:02:57 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:02.417 22:02:57 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:02.417 22:02:57 -- setup/devices.sh@196 -- # blocks=() 00:04:02.417 22:02:57 -- setup/devices.sh@196 -- # declare -a blocks 00:04:02.417 22:02:57 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:02.417 22:02:57 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:02.417 22:02:57 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:02.417 22:02:57 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:02.417 22:02:57 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:02.417 22:02:57 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:02.417 22:02:57 -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:02.417 22:02:57 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:02.417 22:02:57 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:02.417 22:02:57 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:02.417 22:02:57 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:02.417 No valid GPT data, bailing 00:04:02.417 22:02:57 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:02.417 22:02:57 -- scripts/common.sh@393 -- # pt= 00:04:02.417 22:02:57 -- scripts/common.sh@394 -- # return 1 00:04:02.417 22:02:57 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:02.417 22:02:57 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:02.417 22:02:57 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:02.417 22:02:57 -- setup/common.sh@80 -- # echo 1000204886016 00:04:02.417 22:02:57 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:02.417 22:02:57 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:02.417 22:02:57 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:02.417 22:02:57 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:02.417 22:02:57 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:02.417 22:02:57 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:02.417 22:02:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:02.417 22:02:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:02.417 22:02:57 -- common/autotest_common.sh@10 -- # set +x 00:04:02.417 ************************************ 00:04:02.417 START TEST nvme_mount 00:04:02.417 ************************************ 00:04:02.417 22:02:57 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:02.417 22:02:57 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:02.417 22:02:57 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:02.417 22:02:57 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.417 22:02:57 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:02.417 22:02:57 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:02.417 22:02:57 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:02.417 22:02:57 -- setup/common.sh@40 -- # local part_no=1 00:04:02.417 22:02:57 -- setup/common.sh@41 -- # local size=1073741824 00:04:02.417 22:02:57 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:02.417 22:02:57 -- setup/common.sh@44 -- # parts=() 00:04:02.417 22:02:57 -- setup/common.sh@44 -- # local parts 00:04:02.417 22:02:57 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:02.417 22:02:57 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:02.417 22:02:57 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:02.417 22:02:57 -- setup/common.sh@46 -- # (( part++ )) 00:04:02.417 22:02:57 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:02.417 22:02:57 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:02.417 22:02:57 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:02.417 22:02:57 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:03.356 Creating new GPT entries in memory. 00:04:03.357 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:03.357 other utilities. 00:04:03.357 22:02:58 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:03.357 22:02:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.357 22:02:58 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:03.357 22:02:58 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:03.357 22:02:58 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:04.738 Creating new GPT entries in memory. 00:04:04.738 The operation has completed successfully. 
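For reference, the nvme_mount flow being driven here (and completed just below) amounts to: wipe the disk's partition tables, carve one roughly 1 GiB GPT partition, format it ext4, mount it, and drop a test file on it. A manual equivalent, with the device and mount point as placeholders and the usual warning that this is destructive; the real test waits on udev via sync_dev_uevents.sh instead of partprobe:
#!/usr/bin/env bash
set -euo pipefail
disk=/dev/nvme0n1          # placeholder test disk
mnt=/tmp/nvme_mount        # placeholder mount point
sudo sgdisk "$disk" --zap-all                 # destroy existing GPT/MBR structures
sudo sgdisk "$disk" --new=1:2048:2099199      # one ~1 GiB partition (512-byte sectors)
sudo partprobe "$disk"                        # re-read the partition table
sudo mkfs.ext4 -qF "${disk}p1"
sudo mkdir -p "$mnt"
sudo mount "${disk}p1" "$mnt"
sudo touch "$mnt/test_nvme"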
00:04:04.738 22:02:59 -- setup/common.sh@57 -- # (( part++ )) 00:04:04.738 22:02:59 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.738 22:02:59 -- setup/common.sh@62 -- # wait 3355209 00:04:04.738 22:02:59 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.738 22:02:59 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:04.738 22:02:59 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.738 22:02:59 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:04.738 22:02:59 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:04.738 22:02:59 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.738 22:02:59 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.738 22:02:59 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:04.738 22:02:59 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:04.738 22:02:59 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.738 22:02:59 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.738 22:02:59 -- setup/devices.sh@53 -- # local found=0 00:04:04.738 22:02:59 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:04.738 22:02:59 -- setup/devices.sh@56 -- # : 00:04:04.738 22:02:59 -- setup/devices.sh@59 -- # local pci status 00:04:04.738 22:02:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:04.738 22:02:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.738 22:02:59 -- setup/devices.sh@47 -- # setup output config 00:04:04.738 22:02:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.738 22:02:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:07.277 22:03:02 -- setup/devices.sh@63 -- # found=1 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 
22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.277 22:03:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.277 22:03:02 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:07.277 22:03:02 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.277 22:03:02 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:07.277 22:03:02 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.277 22:03:02 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:07.277 22:03:02 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.277 22:03:02 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.277 22:03:02 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:07.277 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:07.277 22:03:02 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:07.277 22:03:02 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:07.537 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:07.537 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:07.537 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:07.537 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:07.537 22:03:02 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:07.537 22:03:02 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:07.537 22:03:02 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.537 22:03:02 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:07.537 22:03:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:07.797 22:03:02 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.797 22:03:02 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.797 22:03:02 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:07.798 22:03:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:07.798 22:03:02 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:07.798 22:03:02 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:07.798 22:03:02 -- setup/devices.sh@53 -- # local found=0 00:04:07.798 22:03:02 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:07.798 22:03:02 -- setup/devices.sh@56 -- # : 00:04:07.798 22:03:02 -- setup/devices.sh@59 -- # local pci status 00:04:07.798 22:03:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.798 22:03:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:07.798 22:03:02 -- setup/devices.sh@47 -- # setup output config 00:04:07.798 22:03:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.798 22:03:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:10.329 22:03:05 -- setup/devices.sh@63 -- # found=1 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.329 22:03:05 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:10.329 22:03:05 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.329 22:03:05 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:10.329 22:03:05 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.329 22:03:05 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.329 22:03:05 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:10.329 22:03:05 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:10.329 22:03:05 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:10.329 22:03:05 -- setup/devices.sh@50 -- # local mount_point= 00:04:10.329 22:03:05 -- setup/devices.sh@51 -- # local test_file= 00:04:10.329 22:03:05 -- setup/devices.sh@53 -- # local found=0 00:04:10.329 22:03:05 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:10.329 22:03:05 -- setup/devices.sh@59 -- # local pci status 00:04:10.329 22:03:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.329 22:03:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:10.329 22:03:05 -- setup/devices.sh@47 -- # setup output config 00:04:10.329 22:03:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.329 22:03:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:13.697 22:03:08 -- 
setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.697 22:03:08 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:13.697 22:03:08 -- setup/devices.sh@63 -- # found=1 00:04:13.697 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.697 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.697 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.697 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.698 22:03:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.698 22:03:08 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:13.698 22:03:08 -- setup/devices.sh@68 -- # return 0 00:04:13.698 22:03:08 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:13.698 22:03:08 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.698 22:03:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:13.698 22:03:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.698 22:03:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:13.698 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:13.698 00:04:13.698 real 0m10.933s 00:04:13.698 user 0m3.294s 00:04:13.698 sys 0m5.505s 00:04:13.698 22:03:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.698 22:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:13.698 ************************************ 00:04:13.698 END TEST nvme_mount 00:04:13.698 ************************************ 00:04:13.698 22:03:08 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:13.698 22:03:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:13.698 22:03:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:13.698 22:03:08 -- common/autotest_common.sh@10 -- # set +x 00:04:13.698 ************************************ 00:04:13.698 START TEST dm_mount 00:04:13.698 ************************************ 00:04:13.698 22:03:08 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:13.698 22:03:08 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:13.698 22:03:08 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:13.698 22:03:08 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:13.698 22:03:08 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:13.698 22:03:08 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:13.698 22:03:08 -- setup/common.sh@40 -- # local part_no=2 00:04:13.698 22:03:08 -- setup/common.sh@41 -- # local size=1073741824 00:04:13.698 22:03:08 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:13.698 22:03:08 -- setup/common.sh@44 -- # parts=() 00:04:13.698 22:03:08 -- setup/common.sh@44 -- # local parts 00:04:13.698 22:03:08 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:13.698 22:03:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.698 22:03:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.698 22:03:08 -- setup/common.sh@46 -- # (( part++ )) 00:04:13.698 22:03:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.698 22:03:08 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.698 22:03:08 -- setup/common.sh@46 -- # (( part++ )) 00:04:13.698 22:03:08 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.698 22:03:08 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:13.698 22:03:08 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:13.698 22:03:08 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:14.635 Creating new GPT entries in memory. 00:04:14.635 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:14.635 other utilities. 00:04:14.635 22:03:09 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:14.635 22:03:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:14.635 22:03:09 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:14.635 22:03:09 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:14.635 22:03:09 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:15.572 Creating new GPT entries in memory. 00:04:15.572 The operation has completed successfully. 
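For reference, dm_mount repeats the same partitioning dance with two small partitions and then (just below) glues them into a single device-mapper node named nvme_dm_test before formatting and mounting it. The trace does not show the dm table itself, so the linear concatenation here is an assumption about what an equivalent manual setup would look like:
#!/usr/bin/env bash
set -euo pipefail
p1=/dev/nvme0n1p1; p2=/dev/nvme0n1p2          # the two freshly created partitions
s1=$(sudo blockdev --getsz "$p1")             # sizes in 512-byte sectors
s2=$(sudo blockdev --getsz "$p2")
# Concatenate p1 followed by p2 into one linear device-mapper target.
printf '0 %s linear %s 0\n%s %s linear %s 0\n' "$s1" "$p1" "$s1" "$s2" "$p2" |
    sudo dmsetup create nvme_dm_test
sudo mkfs.ext4 -qF /dev/mapper/nvme_dm_test
sudo mkdir -p /tmp/dm_mount
sudo mount /dev/mapper/nvme_dm_test /tmp/dm_mount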
00:04:15.572 22:03:10 -- setup/common.sh@57 -- # (( part++ )) 00:04:15.572 22:03:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.572 22:03:10 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:15.572 22:03:10 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:15.572 22:03:10 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:16.509 The operation has completed successfully. 00:04:16.509 22:03:11 -- setup/common.sh@57 -- # (( part++ )) 00:04:16.509 22:03:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.509 22:03:11 -- setup/common.sh@62 -- # wait 3359757 00:04:16.509 22:03:11 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:16.509 22:03:11 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.509 22:03:11 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.509 22:03:11 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:16.509 22:03:11 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:16.509 22:03:11 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.509 22:03:11 -- setup/devices.sh@161 -- # break 00:04:16.509 22:03:11 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.509 22:03:11 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:16.509 22:03:11 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:16.509 22:03:11 -- setup/devices.sh@166 -- # dm=dm-2 00:04:16.509 22:03:11 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:16.509 22:03:11 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:16.509 22:03:11 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.509 22:03:11 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:16.509 22:03:11 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.509 22:03:11 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:16.509 22:03:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:16.509 22:03:11 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.509 22:03:11 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.509 22:03:11 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:16.509 22:03:11 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:16.509 22:03:11 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:16.509 22:03:11 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:16.509 22:03:11 -- setup/devices.sh@53 -- # local found=0 00:04:16.509 22:03:11 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:16.509 22:03:11 -- setup/devices.sh@56 -- # : 00:04:16.509 22:03:11 -- 
setup/devices.sh@59 -- # local pci status 00:04:16.509 22:03:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:16.509 22:03:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:16.509 22:03:11 -- setup/devices.sh@47 -- # setup output config 00:04:16.509 22:03:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.509 22:03:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:19.040 22:03:14 -- setup/devices.sh@63 -- # found=1 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.040 22:03:14 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.040 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.297 22:03:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.297 22:03:14 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:19.297 22:03:14 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.297 22:03:14 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.297 22:03:14 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.297 22:03:14 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.297 22:03:14 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:19.297 22:03:14 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:19.297 22:03:14 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:19.297 22:03:14 -- setup/devices.sh@50 -- # local mount_point= 00:04:19.297 22:03:14 -- setup/devices.sh@51 -- # local test_file= 00:04:19.297 22:03:14 -- setup/devices.sh@53 -- # local found=0 00:04:19.297 22:03:14 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.297 22:03:14 -- setup/devices.sh@59 -- # local pci status 00:04:19.297 22:03:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.297 22:03:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:19.297 22:03:14 -- setup/devices.sh@47 -- # setup output config 00:04:19.297 22:03:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.297 22:03:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:21.828 22:03:16 -- setup/devices.sh@63 -- # found=1 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 
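For reference, the teardown that follows (cleanup_dm, then cleanup_nvme) is the reverse of the setup: unmount, remove the dm mapping, and wipe every filesystem/GPT/PMBR signature so the disk ends up blank; the '53 ef' and '45 46 49 20 50 41 52 54' bytes reported by wipefs below are just the ext4 magic and the 'EFI PART' GPT header being erased. A condensed manual equivalent with placeholder paths:
#!/usr/bin/env bash
sudo umount /tmp/dm_mount 2>/dev/null || true
sudo dmsetup remove --force nvme_dm_test 2>/dev/null || true
for dev in /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme0n1; do
    [[ -b $dev ]] && sudo wipefs --all "$dev"     # erase fs and partition-table signatures
done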
00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.828 22:03:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.828 22:03:16 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:21.828 22:03:16 -- setup/devices.sh@68 -- # return 0 00:04:21.828 22:03:16 -- setup/devices.sh@187 -- # cleanup_dm 00:04:21.828 22:03:16 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.828 22:03:16 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:21.828 22:03:16 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:21.828 22:03:16 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:21.828 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:21.828 22:03:16 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:21.828 00:04:21.828 real 0m8.476s 00:04:21.828 user 0m2.085s 00:04:21.828 sys 0m3.452s 00:04:21.828 22:03:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.828 22:03:16 -- common/autotest_common.sh@10 -- # set +x 00:04:21.828 ************************************ 00:04:21.828 END TEST dm_mount 00:04:21.828 ************************************ 00:04:21.828 22:03:16 -- setup/devices.sh@1 -- # cleanup 00:04:21.828 22:03:16 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:21.828 22:03:16 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.828 22:03:16 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:21.828 22:03:16 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:21.828 22:03:16 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.086 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:22.086 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:22.086 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:22.086 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:22.086 22:03:17 -- setup/devices.sh@12 -- # cleanup_dm 00:04:22.086 22:03:17 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.086 22:03:17 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:22.086 22:03:17 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.086 22:03:17 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:22.086 22:03:17 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.086 22:03:17 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:22.086 00:04:22.086 real 0m22.918s 00:04:22.086 user 0m6.614s 00:04:22.086 sys 0m11.113s 00:04:22.086 22:03:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.086 22:03:17 -- common/autotest_common.sh@10 -- # set +x 00:04:22.086 ************************************ 00:04:22.086 END TEST devices 00:04:22.086 ************************************ 00:04:22.344 00:04:22.344 real 1m14.819s 00:04:22.344 user 0m24.349s 00:04:22.344 sys 0m41.305s 00:04:22.344 22:03:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.344 22:03:17 -- common/autotest_common.sh@10 -- # set +x 00:04:22.344 ************************************ 00:04:22.344 END TEST setup.sh 00:04:22.344 ************************************ 00:04:22.345 22:03:17 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:24.876 Hugepages 00:04:24.876 node hugesize free / total 00:04:24.876 node0 1048576kB 0 / 0 00:04:24.876 node0 2048kB 2048 / 2048 00:04:24.876 node1 1048576kB 0 / 0 00:04:24.876 node1 2048kB 0 / 0 00:04:24.876 00:04:24.876 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:24.876 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:24.876 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:24.876 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:24.876 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:24.876 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:24.876 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:24.876 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:24.876 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:24.876 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:24.876 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:24.876 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:24.876 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:24.876 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:24.876 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:24.876 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:24.876 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:24.876 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:24.876 22:03:19 -- spdk/autotest.sh@141 -- # uname -s 00:04:24.876 22:03:19 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:24.876 22:03:19 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:24.876 22:03:19 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:27.479 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.738 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.738 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.738 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.738 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.738 0000:00:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:04:27.739 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.739 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:27.739 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.739 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.739 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.739 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.739 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.739 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.739 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.739 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:28.678 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:28.678 22:03:23 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:29.617 22:03:24 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:29.617 22:03:24 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:29.617 22:03:24 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:29.617 22:03:24 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:29.617 22:03:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:29.617 22:03:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:29.617 22:03:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.617 22:03:24 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:29.617 22:03:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:29.876 22:03:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:29.876 22:03:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:29.876 22:03:24 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.414 Waiting for block devices as requested 00:04:32.414 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:32.414 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:32.414 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:32.673 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:32.673 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:32.673 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:32.673 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:32.931 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:32.931 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:32.931 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:33.190 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:33.190 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:33.190 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:33.190 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:33.449 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:33.449 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:33.449 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:33.449 22:03:28 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:33.449 22:03:28 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:33.449 22:03:28 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:33.450 22:03:28 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:33.450 22:03:28 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:33.450 22:03:28 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:33.709 22:03:28 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:33.709 22:03:28 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:33.709 22:03:28 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:33.709 22:03:28 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:33.709 22:03:28 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:33.709 22:03:28 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:33.709 22:03:28 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:33.709 22:03:28 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:04:33.709 22:03:28 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:33.709 22:03:28 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:33.709 22:03:28 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:33.709 22:03:28 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:33.709 22:03:28 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:33.709 22:03:28 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:33.709 22:03:28 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:33.709 22:03:28 -- common/autotest_common.sh@1542 -- # continue 00:04:33.709 22:03:28 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:33.709 22:03:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:33.709 22:03:28 -- common/autotest_common.sh@10 -- # set +x 00:04:33.709 22:03:28 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:33.709 22:03:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:33.709 22:03:28 -- common/autotest_common.sh@10 -- # set +x 00:04:33.709 22:03:28 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:37.002 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:37.002 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:37.261 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:37.522 22:03:32 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:37.522 22:03:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:37.522 22:03:32 -- common/autotest_common.sh@10 -- # set +x 00:04:37.522 22:03:32 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:37.522 22:03:32 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:37.522 22:03:32 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:37.522 22:03:32 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:37.522 22:03:32 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:37.522 22:03:32 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:37.522 22:03:32 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:37.522 
22:03:32 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:37.522 22:03:32 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:37.522 22:03:32 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:37.522 22:03:32 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:37.522 22:03:32 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:37.522 22:03:32 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:37.522 22:03:32 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:37.522 22:03:32 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:37.522 22:03:32 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:04:37.522 22:03:32 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:37.522 22:03:32 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:04:37.522 22:03:32 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:04:37.522 22:03:32 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:04:37.522 22:03:32 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3368907 00:04:37.522 22:03:32 -- common/autotest_common.sh@1583 -- # waitforlisten 3368907 00:04:37.522 22:03:32 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.522 22:03:32 -- common/autotest_common.sh@819 -- # '[' -z 3368907 ']' 00:04:37.522 22:03:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.522 22:03:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:37.522 22:03:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.522 22:03:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:37.522 22:03:32 -- common/autotest_common.sh@10 -- # set +x 00:04:37.522 [2024-07-24 22:03:32.649661] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
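Editor's note: the opal_revert_cleanup path traced above builds its controller list from gen_nvme.sh output, keeps only controllers whose PCI device id is 0x0a54, and checks the OACS namespace-management bit before touching the device. The sketch below is a rough standalone reconstruction of those probe steps; the paths, the jq filter, and the sysfs/nvme-cli checks are taken from the trace itself, while the loop structure and output formatting are illustrative assumptions, not the harness code.

#!/usr/bin/env bash
# Sketch only: mirrors the probe steps traced above (gen_nvme.sh | jq for the
# controller addresses, sysfs device-id check, nvme id-ctrl OACS check).
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path as seen in the trace

# Same enumeration get_nvme_bdfs performs in the trace.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} )) || { echo "no NVMe controllers found" >&2; exit 1; }

for bdf in "${bdfs[@]}"; do
    # Keep only controllers with PCI device id 0x0a54, as get_nvme_bdfs_by_id does.
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] || continue

    # Resolve the controller node for this bdf, as get_nvme_ctrlr_from_bdf does.
    sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrlr=/dev/$(basename "$sysfs")

    # OACS word from nvme-cli; bit 3 (0x8) is namespace management, the bit checked above.
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    printf '%s oacs=%s ns_manage=%d\n' "$bdf" "$oacs" $(( oacs & 0x8 ))
done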
00:04:37.522 [2024-07-24 22:03:32.649709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3368907 ] 00:04:37.782 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.782 [2024-07-24 22:03:32.706735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.782 [2024-07-24 22:03:32.748552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:37.782 [2024-07-24 22:03:32.748699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.378 22:03:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:38.378 22:03:33 -- common/autotest_common.sh@852 -- # return 0 00:04:38.378 22:03:33 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:38.378 22:03:33 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:38.378 22:03:33 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:41.678 nvme0n1 00:04:41.678 22:03:36 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:41.678 [2024-07-24 22:03:36.582983] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:41.678 request: 00:04:41.678 { 00:04:41.678 "nvme_ctrlr_name": "nvme0", 00:04:41.678 "password": "test", 00:04:41.678 "method": "bdev_nvme_opal_revert", 00:04:41.678 "req_id": 1 00:04:41.678 } 00:04:41.678 Got JSON-RPC error response 00:04:41.678 response: 00:04:41.678 { 00:04:41.678 "code": -32602, 00:04:41.678 "message": "Invalid parameters" 00:04:41.678 } 00:04:41.678 22:03:36 -- common/autotest_common.sh@1589 -- # true 00:04:41.678 22:03:36 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:41.678 22:03:36 -- common/autotest_common.sh@1593 -- # killprocess 3368907 00:04:41.678 22:03:36 -- common/autotest_common.sh@926 -- # '[' -z 3368907 ']' 00:04:41.678 22:03:36 -- common/autotest_common.sh@930 -- # kill -0 3368907 00:04:41.678 22:03:36 -- common/autotest_common.sh@931 -- # uname 00:04:41.678 22:03:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:41.678 22:03:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3368907 00:04:41.678 22:03:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:41.678 22:03:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:41.678 22:03:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3368907' 00:04:41.678 killing process with pid 3368907 00:04:41.678 22:03:36 -- common/autotest_common.sh@945 -- # kill 3368907 00:04:41.678 22:03:36 -- common/autotest_common.sh@950 -- # wait 3368907 00:04:41.678 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:41.678 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:41.678 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:41.678 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:41.678 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:41.678 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:41.678 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:04:41.678 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:04:43.590 22:03:38 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:43.590 22:03:38 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:43.590 22:03:38 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:43.590 22:03:38 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:43.590 22:03:38 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:43.590 22:03:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:43.590 22:03:38 -- common/autotest_common.sh@10 -- # set +x 00:04:43.590 22:03:38 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:43.590 22:03:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.590 22:03:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.590 22:03:38 -- common/autotest_common.sh@10 -- # set +x 00:04:43.590 ************************************ 00:04:43.590 START TEST env 00:04:43.590 ************************************ 00:04:43.590 22:03:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:43.590 * Looking for test storage...
00:04:43.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:43.590 22:03:38 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:43.590 22:03:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.590 22:03:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.590 22:03:38 -- common/autotest_common.sh@10 -- # set +x 00:04:43.590 ************************************ 00:04:43.590 START TEST env_memory 00:04:43.590 ************************************ 00:04:43.590 22:03:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:43.590 00:04:43.590 00:04:43.590 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.590 http://cunit.sourceforge.net/ 00:04:43.590 00:04:43.590 00:04:43.590 Suite: memory 00:04:43.590 Test: alloc and free memory map ...[2024-07-24 22:03:38.386495] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:43.590 passed 00:04:43.590 Test: mem map translation ...[2024-07-24 22:03:38.404729] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:43.590 [2024-07-24 22:03:38.404744] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:43.591 [2024-07-24 22:03:38.404780] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:43.591 [2024-07-24 22:03:38.404786] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:43.591 passed 00:04:43.591 Test: mem map registration ...[2024-07-24 22:03:38.441745] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:43.591 [2024-07-24 22:03:38.441760] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:43.591 passed 00:04:43.591 Test: mem map adjacent registrations ...passed 00:04:43.591 00:04:43.591 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.591 suites 1 1 n/a 0 0 00:04:43.591 tests 4 4 4 0 0 00:04:43.591 asserts 152 152 152 0 n/a 00:04:43.591 00:04:43.591 Elapsed time = 0.138 seconds 00:04:43.591 00:04:43.591 real 0m0.150s 00:04:43.591 user 0m0.146s 00:04:43.591 sys 0m0.004s 00:04:43.591 22:03:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.591 22:03:38 -- common/autotest_common.sh@10 -- # set +x 00:04:43.591 ************************************ 00:04:43.591 END TEST env_memory 00:04:43.591 ************************************ 00:04:43.591 22:03:38 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:43.591 22:03:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.591 22:03:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.591 22:03:38 -- common/autotest_common.sh@10 -- # set +x 
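Editor's note: every suite in this log is launched through the harness's run_test wrapper, which is what prints the START TEST / END TEST banners and the per-test Run Summary framing seen above and below. The stand-in below only reproduces that observable shape; the real helper in test/common/autotest_common.sh also records timing and manages xtrace, which is omitted here on purpose.

# Minimal stand-in for the run_test wrapper; banner and exit-status behaviour only.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"                               # run the test command, e.g. test/env/env.sh
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Invocation mirroring the trace (path taken from this log):
run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys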
00:04:43.591 ************************************ 00:04:43.591 START TEST env_vtophys 00:04:43.591 ************************************ 00:04:43.591 22:03:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:43.591 EAL: lib.eal log level changed from notice to debug 00:04:43.591 EAL: Detected lcore 0 as core 0 on socket 0 00:04:43.591 EAL: Detected lcore 1 as core 1 on socket 0 00:04:43.591 EAL: Detected lcore 2 as core 2 on socket 0 00:04:43.591 EAL: Detected lcore 3 as core 3 on socket 0 00:04:43.591 EAL: Detected lcore 4 as core 4 on socket 0 00:04:43.591 EAL: Detected lcore 5 as core 5 on socket 0 00:04:43.591 EAL: Detected lcore 6 as core 6 on socket 0 00:04:43.591 EAL: Detected lcore 7 as core 8 on socket 0 00:04:43.591 EAL: Detected lcore 8 as core 9 on socket 0 00:04:43.591 EAL: Detected lcore 9 as core 10 on socket 0 00:04:43.591 EAL: Detected lcore 10 as core 11 on socket 0 00:04:43.591 EAL: Detected lcore 11 as core 12 on socket 0 00:04:43.591 EAL: Detected lcore 12 as core 13 on socket 0 00:04:43.591 EAL: Detected lcore 13 as core 16 on socket 0 00:04:43.591 EAL: Detected lcore 14 as core 17 on socket 0 00:04:43.591 EAL: Detected lcore 15 as core 18 on socket 0 00:04:43.591 EAL: Detected lcore 16 as core 19 on socket 0 00:04:43.591 EAL: Detected lcore 17 as core 20 on socket 0 00:04:43.591 EAL: Detected lcore 18 as core 21 on socket 0 00:04:43.591 EAL: Detected lcore 19 as core 25 on socket 0 00:04:43.591 EAL: Detected lcore 20 as core 26 on socket 0 00:04:43.591 EAL: Detected lcore 21 as core 27 on socket 0 00:04:43.591 EAL: Detected lcore 22 as core 28 on socket 0 00:04:43.591 EAL: Detected lcore 23 as core 29 on socket 0 00:04:43.591 EAL: Detected lcore 24 as core 0 on socket 1 00:04:43.591 EAL: Detected lcore 25 as core 1 on socket 1 00:04:43.591 EAL: Detected lcore 26 as core 2 on socket 1 00:04:43.591 EAL: Detected lcore 27 as core 3 on socket 1 00:04:43.591 EAL: Detected lcore 28 as core 4 on socket 1 00:04:43.591 EAL: Detected lcore 29 as core 5 on socket 1 00:04:43.591 EAL: Detected lcore 30 as core 6 on socket 1 00:04:43.591 EAL: Detected lcore 31 as core 9 on socket 1 00:04:43.591 EAL: Detected lcore 32 as core 10 on socket 1 00:04:43.591 EAL: Detected lcore 33 as core 11 on socket 1 00:04:43.591 EAL: Detected lcore 34 as core 12 on socket 1 00:04:43.591 EAL: Detected lcore 35 as core 13 on socket 1 00:04:43.591 EAL: Detected lcore 36 as core 16 on socket 1 00:04:43.591 EAL: Detected lcore 37 as core 17 on socket 1 00:04:43.591 EAL: Detected lcore 38 as core 18 on socket 1 00:04:43.591 EAL: Detected lcore 39 as core 19 on socket 1 00:04:43.591 EAL: Detected lcore 40 as core 20 on socket 1 00:04:43.591 EAL: Detected lcore 41 as core 21 on socket 1 00:04:43.591 EAL: Detected lcore 42 as core 24 on socket 1 00:04:43.591 EAL: Detected lcore 43 as core 25 on socket 1 00:04:43.591 EAL: Detected lcore 44 as core 26 on socket 1 00:04:43.591 EAL: Detected lcore 45 as core 27 on socket 1 00:04:43.591 EAL: Detected lcore 46 as core 28 on socket 1 00:04:43.591 EAL: Detected lcore 47 as core 29 on socket 1 00:04:43.591 EAL: Detected lcore 48 as core 0 on socket 0 00:04:43.591 EAL: Detected lcore 49 as core 1 on socket 0 00:04:43.591 EAL: Detected lcore 50 as core 2 on socket 0 00:04:43.591 EAL: Detected lcore 51 as core 3 on socket 0 00:04:43.591 EAL: Detected lcore 52 as core 4 on socket 0 00:04:43.591 EAL: Detected lcore 53 as core 5 on socket 0 00:04:43.591 EAL: Detected lcore 54 as core 6 on socket 0 
00:04:43.591 EAL: Detected lcore 55 as core 8 on socket 0 00:04:43.591 EAL: Detected lcore 56 as core 9 on socket 0 00:04:43.591 EAL: Detected lcore 57 as core 10 on socket 0 00:04:43.591 EAL: Detected lcore 58 as core 11 on socket 0 00:04:43.591 EAL: Detected lcore 59 as core 12 on socket 0 00:04:43.591 EAL: Detected lcore 60 as core 13 on socket 0 00:04:43.591 EAL: Detected lcore 61 as core 16 on socket 0 00:04:43.591 EAL: Detected lcore 62 as core 17 on socket 0 00:04:43.591 EAL: Detected lcore 63 as core 18 on socket 0 00:04:43.591 EAL: Detected lcore 64 as core 19 on socket 0 00:04:43.591 EAL: Detected lcore 65 as core 20 on socket 0 00:04:43.591 EAL: Detected lcore 66 as core 21 on socket 0 00:04:43.591 EAL: Detected lcore 67 as core 25 on socket 0 00:04:43.591 EAL: Detected lcore 68 as core 26 on socket 0 00:04:43.591 EAL: Detected lcore 69 as core 27 on socket 0 00:04:43.591 EAL: Detected lcore 70 as core 28 on socket 0 00:04:43.591 EAL: Detected lcore 71 as core 29 on socket 0 00:04:43.591 EAL: Detected lcore 72 as core 0 on socket 1 00:04:43.591 EAL: Detected lcore 73 as core 1 on socket 1 00:04:43.591 EAL: Detected lcore 74 as core 2 on socket 1 00:04:43.591 EAL: Detected lcore 75 as core 3 on socket 1 00:04:43.591 EAL: Detected lcore 76 as core 4 on socket 1 00:04:43.591 EAL: Detected lcore 77 as core 5 on socket 1 00:04:43.591 EAL: Detected lcore 78 as core 6 on socket 1 00:04:43.591 EAL: Detected lcore 79 as core 9 on socket 1 00:04:43.591 EAL: Detected lcore 80 as core 10 on socket 1 00:04:43.591 EAL: Detected lcore 81 as core 11 on socket 1 00:04:43.591 EAL: Detected lcore 82 as core 12 on socket 1 00:04:43.591 EAL: Detected lcore 83 as core 13 on socket 1 00:04:43.591 EAL: Detected lcore 84 as core 16 on socket 1 00:04:43.591 EAL: Detected lcore 85 as core 17 on socket 1 00:04:43.591 EAL: Detected lcore 86 as core 18 on socket 1 00:04:43.591 EAL: Detected lcore 87 as core 19 on socket 1 00:04:43.591 EAL: Detected lcore 88 as core 20 on socket 1 00:04:43.591 EAL: Detected lcore 89 as core 21 on socket 1 00:04:43.591 EAL: Detected lcore 90 as core 24 on socket 1 00:04:43.591 EAL: Detected lcore 91 as core 25 on socket 1 00:04:43.591 EAL: Detected lcore 92 as core 26 on socket 1 00:04:43.591 EAL: Detected lcore 93 as core 27 on socket 1 00:04:43.591 EAL: Detected lcore 94 as core 28 on socket 1 00:04:43.591 EAL: Detected lcore 95 as core 29 on socket 1 00:04:43.591 EAL: Maximum logical cores by configuration: 128 00:04:43.591 EAL: Detected CPU lcores: 96 00:04:43.591 EAL: Detected NUMA nodes: 2 00:04:43.591 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:43.591 EAL: Detected shared linkage of DPDK 00:04:43.591 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:43.591 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:43.591 EAL: Registered [vdev] bus. 
00:04:43.591 EAL: bus.vdev log level changed from disabled to notice 00:04:43.591 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:43.591 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:43.591 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:43.591 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:43.591 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:43.591 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:43.591 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:43.591 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:43.591 EAL: No shared files mode enabled, IPC will be disabled 00:04:43.591 EAL: No shared files mode enabled, IPC is disabled 00:04:43.591 EAL: Bus pci wants IOVA as 'DC' 00:04:43.591 EAL: Bus vdev wants IOVA as 'DC' 00:04:43.591 EAL: Buses did not request a specific IOVA mode. 00:04:43.591 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:43.591 EAL: Selected IOVA mode 'VA' 00:04:43.591 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.591 EAL: Probing VFIO support... 00:04:43.591 EAL: IOMMU type 1 (Type 1) is supported 00:04:43.591 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:43.591 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:43.591 EAL: VFIO support initialized 00:04:43.591 EAL: Ask a virtual area of 0x2e000 bytes 00:04:43.591 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:43.591 EAL: Setting up physically contiguous memory... 
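Editor's note: EAL selects IOVA as VA above only because IOMMU groups are present and VFIO support initialized, and the "No free 2048 kB hugepages reported on node 1" notice reflects the per-node hugepage pools shown earlier by setup.sh status. A quick host-side check of those same preconditions could look like the sketch below; it uses only standard Linux sysfs paths and lsmod, nothing SPDK-specific, and is purely illustrative.

# Illustrative pre-checks for the conditions EAL reports above:
# IOMMU groups (needed for IOVA-as-VA), the vfio-pci module, and per-node 2048 kB hugepages.
if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
    echo "IOMMU groups present -> EAL can select IOVA as VA"
else
    echo "no IOMMU groups -> EAL would have to fall back to physical addresses"
fi

lsmod | grep -q '^vfio_pci' && echo "vfio-pci module loaded"

for node in /sys/devices/system/node/node*; do
    hp=$node/hugepages/hugepages-2048kB
    [[ -d $hp ]] || continue
    echo "$(basename "$node"): $(cat "$hp/free_hugepages")/$(cat "$hp/nr_hugepages") free/total 2 MB hugepages"
done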
00:04:43.591 EAL: Setting maximum number of open files to 524288 00:04:43.591 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:43.591 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:43.591 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:43.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.591 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:43.591 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.591 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.591 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:43.591 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:43.591 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.592 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:43.592 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.592 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.592 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:43.592 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:43.592 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.592 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:43.592 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.592 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.592 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:43.592 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:43.592 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.592 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:43.592 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.592 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.592 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:43.592 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:43.592 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:43.592 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.592 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:43.592 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.592 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.592 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:43.592 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:43.592 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.592 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:43.592 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.592 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.592 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:43.592 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:43.592 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.592 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:43.592 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.592 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.592 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:43.592 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:43.592 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.592 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:43.592 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.592 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.592 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:43.592 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:43.592 EAL: Hugepages will be freed exactly as allocated. 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: TSC frequency is ~2300000 KHz 00:04:43.592 EAL: Main lcore 0 is ready (tid=7f1dee7a7a00;cpuset=[0]) 00:04:43.592 EAL: Trying to obtain current memory policy. 00:04:43.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.592 EAL: Restoring previous memory policy: 0 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was expanded by 2MB 00:04:43.592 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:04:43.592 EAL: probe driver: 8086:37d2 net_i40e 00:04:43.592 EAL: Not managed by a supported kernel driver, skipped 00:04:43.592 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:04:43.592 EAL: probe driver: 8086:37d2 net_i40e 00:04:43.592 EAL: Not managed by a supported kernel driver, skipped 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:43.592 EAL: Mem event callback 'spdk:(nil)' registered 00:04:43.592 00:04:43.592 00:04:43.592 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.592 http://cunit.sourceforge.net/ 00:04:43.592 00:04:43.592 00:04:43.592 Suite: components_suite 00:04:43.592 Test: vtophys_malloc_test ...passed 00:04:43.592 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:43.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.592 EAL: Restoring previous memory policy: 4 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was expanded by 4MB 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was shrunk by 4MB 00:04:43.592 EAL: Trying to obtain current memory policy. 00:04:43.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.592 EAL: Restoring previous memory policy: 4 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was expanded by 6MB 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was shrunk by 6MB 00:04:43.592 EAL: Trying to obtain current memory policy. 00:04:43.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.592 EAL: Restoring previous memory policy: 4 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was expanded by 10MB 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was shrunk by 10MB 00:04:43.592 EAL: Trying to obtain current memory policy. 
00:04:43.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.592 EAL: Restoring previous memory policy: 4 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was expanded by 18MB 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was shrunk by 18MB 00:04:43.592 EAL: Trying to obtain current memory policy. 00:04:43.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.592 EAL: Restoring previous memory policy: 4 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was expanded by 34MB 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was shrunk by 34MB 00:04:43.592 EAL: Trying to obtain current memory policy. 00:04:43.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.592 EAL: Restoring previous memory policy: 4 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was expanded by 66MB 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was shrunk by 66MB 00:04:43.592 EAL: Trying to obtain current memory policy. 00:04:43.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.592 EAL: Restoring previous memory policy: 4 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was expanded by 130MB 00:04:43.592 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.592 EAL: request: mp_malloc_sync 00:04:43.592 EAL: No shared files mode enabled, IPC is disabled 00:04:43.592 EAL: Heap on socket 0 was shrunk by 130MB 00:04:43.592 EAL: Trying to obtain current memory policy. 00:04:43.592 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.853 EAL: Restoring previous memory policy: 4 00:04:43.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.853 EAL: request: mp_malloc_sync 00:04:43.853 EAL: No shared files mode enabled, IPC is disabled 00:04:43.853 EAL: Heap on socket 0 was expanded by 258MB 00:04:43.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.853 EAL: request: mp_malloc_sync 00:04:43.853 EAL: No shared files mode enabled, IPC is disabled 00:04:43.853 EAL: Heap on socket 0 was shrunk by 258MB 00:04:43.853 EAL: Trying to obtain current memory policy. 
00:04:43.853 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.853 EAL: Restoring previous memory policy: 4 00:04:43.853 EAL: Calling mem event callback 'spdk:(nil)' 00:04:43.853 EAL: request: mp_malloc_sync 00:04:43.853 EAL: No shared files mode enabled, IPC is disabled 00:04:43.853 EAL: Heap on socket 0 was expanded by 514MB 00:04:44.113 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.113 EAL: request: mp_malloc_sync 00:04:44.113 EAL: No shared files mode enabled, IPC is disabled 00:04:44.113 EAL: Heap on socket 0 was shrunk by 514MB 00:04:44.113 EAL: Trying to obtain current memory policy. 00:04:44.113 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.373 EAL: Restoring previous memory policy: 4 00:04:44.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.373 EAL: request: mp_malloc_sync 00:04:44.373 EAL: No shared files mode enabled, IPC is disabled 00:04:44.373 EAL: Heap on socket 0 was expanded by 1026MB 00:04:44.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.633 EAL: request: mp_malloc_sync 00:04:44.633 EAL: No shared files mode enabled, IPC is disabled 00:04:44.633 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:44.633 passed 00:04:44.633 00:04:44.633 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.633 suites 1 1 n/a 0 0 00:04:44.633 tests 2 2 2 0 0 00:04:44.633 asserts 497 497 497 0 n/a 00:04:44.633 00:04:44.633 Elapsed time = 0.960 seconds 00:04:44.633 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.633 EAL: request: mp_malloc_sync 00:04:44.633 EAL: No shared files mode enabled, IPC is disabled 00:04:44.633 EAL: Heap on socket 0 was shrunk by 2MB 00:04:44.633 EAL: No shared files mode enabled, IPC is disabled 00:04:44.633 EAL: No shared files mode enabled, IPC is disabled 00:04:44.633 EAL: No shared files mode enabled, IPC is disabled 00:04:44.633 00:04:44.633 real 0m1.068s 00:04:44.633 user 0m0.628s 00:04:44.633 sys 0m0.412s 00:04:44.633 22:03:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.633 22:03:39 -- common/autotest_common.sh@10 -- # set +x 00:04:44.633 ************************************ 00:04:44.633 END TEST env_vtophys 00:04:44.633 ************************************ 00:04:44.634 22:03:39 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:44.634 22:03:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.634 22:03:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.634 22:03:39 -- common/autotest_common.sh@10 -- # set +x 00:04:44.634 ************************************ 00:04:44.634 START TEST env_pci 00:04:44.634 ************************************ 00:04:44.634 22:03:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:44.634 00:04:44.634 00:04:44.634 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.634 http://cunit.sourceforge.net/ 00:04:44.634 00:04:44.634 00:04:44.634 Suite: pci 00:04:44.634 Test: pci_hook ...[2024-07-24 22:03:39.650432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3370244 has claimed it 00:04:44.634 EAL: Cannot find device (10000:00:01.0) 00:04:44.634 EAL: Failed to attach device on primary process 00:04:44.634 passed 00:04:44.634 00:04:44.634 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.634 suites 1 1 n/a 0 0 00:04:44.634 tests 1 1 1 0 0 
00:04:44.634 asserts 25 25 25 0 n/a 00:04:44.634 00:04:44.634 Elapsed time = 0.029 seconds 00:04:44.634 00:04:44.634 real 0m0.047s 00:04:44.634 user 0m0.012s 00:04:44.634 sys 0m0.035s 00:04:44.634 22:03:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.634 22:03:39 -- common/autotest_common.sh@10 -- # set +x 00:04:44.634 ************************************ 00:04:44.634 END TEST env_pci 00:04:44.634 ************************************ 00:04:44.634 22:03:39 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:44.634 22:03:39 -- env/env.sh@15 -- # uname 00:04:44.634 22:03:39 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:44.634 22:03:39 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:44.634 22:03:39 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.634 22:03:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:44.634 22:03:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.634 22:03:39 -- common/autotest_common.sh@10 -- # set +x 00:04:44.634 ************************************ 00:04:44.634 START TEST env_dpdk_post_init 00:04:44.634 ************************************ 00:04:44.634 22:03:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.634 EAL: Detected CPU lcores: 96 00:04:44.634 EAL: Detected NUMA nodes: 2 00:04:44.634 EAL: Detected shared linkage of DPDK 00:04:44.634 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.894 EAL: Selected IOVA mode 'VA' 00:04:44.894 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.894 EAL: VFIO support initialized 00:04:44.894 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.894 EAL: Using IOMMU type 1 (Type 1) 00:04:44.894 EAL: Ignore mapping IO port bar(1) 00:04:44.894 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:44.894 EAL: Ignore mapping IO port bar(1) 00:04:44.894 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:44.894 EAL: Ignore mapping IO port bar(1) 00:04:44.894 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:44.894 EAL: Ignore mapping IO port bar(1) 00:04:44.894 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:44.894 EAL: Ignore mapping IO port bar(1) 00:04:44.894 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:44.894 EAL: Ignore mapping IO port bar(1) 00:04:44.894 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:44.894 EAL: Ignore mapping IO port bar(1) 00:04:44.894 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:44.894 EAL: Ignore mapping IO port bar(1) 00:04:44.894 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:45.834 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:45.834 EAL: Ignore mapping IO port bar(1) 00:04:45.834 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:45.834 EAL: Ignore mapping IO port bar(1) 00:04:45.834 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:45.834 EAL: Ignore mapping IO port bar(1) 00:04:45.834 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 
00:04:45.834 EAL: Ignore mapping IO port bar(1) 00:04:45.834 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:45.834 EAL: Ignore mapping IO port bar(1) 00:04:45.834 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:45.834 EAL: Ignore mapping IO port bar(1) 00:04:45.834 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:45.834 EAL: Ignore mapping IO port bar(1) 00:04:45.834 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:45.834 EAL: Ignore mapping IO port bar(1) 00:04:45.834 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:49.146 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:49.146 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:49.146 Starting DPDK initialization... 00:04:49.146 Starting SPDK post initialization... 00:04:49.146 SPDK NVMe probe 00:04:49.146 Attaching to 0000:5e:00.0 00:04:49.146 Attached to 0000:5e:00.0 00:04:49.146 Cleaning up... 00:04:49.146 00:04:49.146 real 0m4.312s 00:04:49.146 user 0m3.249s 00:04:49.146 sys 0m0.133s 00:04:49.146 22:03:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.146 22:03:44 -- common/autotest_common.sh@10 -- # set +x 00:04:49.146 ************************************ 00:04:49.146 END TEST env_dpdk_post_init 00:04:49.146 ************************************ 00:04:49.146 22:03:44 -- env/env.sh@26 -- # uname 00:04:49.146 22:03:44 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:49.146 22:03:44 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.146 22:03:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.146 22:03:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.146 22:03:44 -- common/autotest_common.sh@10 -- # set +x 00:04:49.146 ************************************ 00:04:49.146 START TEST env_mem_callbacks 00:04:49.146 ************************************ 00:04:49.146 22:03:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:49.146 EAL: Detected CPU lcores: 96 00:04:49.146 EAL: Detected NUMA nodes: 2 00:04:49.146 EAL: Detected shared linkage of DPDK 00:04:49.146 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.146 EAL: Selected IOVA mode 'VA' 00:04:49.146 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.146 EAL: VFIO support initialized 00:04:49.146 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.146 00:04:49.146 00:04:49.146 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.146 http://cunit.sourceforge.net/ 00:04:49.146 00:04:49.146 00:04:49.146 Suite: memory 00:04:49.146 Test: test ... 
00:04:49.146 register 0x200000200000 2097152 00:04:49.146 malloc 3145728 00:04:49.146 register 0x200000400000 4194304 00:04:49.146 buf 0x200000500000 len 3145728 PASSED 00:04:49.146 malloc 64 00:04:49.146 buf 0x2000004fff40 len 64 PASSED 00:04:49.146 malloc 4194304 00:04:49.146 register 0x200000800000 6291456 00:04:49.146 buf 0x200000a00000 len 4194304 PASSED 00:04:49.146 free 0x200000500000 3145728 00:04:49.146 free 0x2000004fff40 64 00:04:49.146 unregister 0x200000400000 4194304 PASSED 00:04:49.146 free 0x200000a00000 4194304 00:04:49.146 unregister 0x200000800000 6291456 PASSED 00:04:49.146 malloc 8388608 00:04:49.146 register 0x200000400000 10485760 00:04:49.146 buf 0x200000600000 len 8388608 PASSED 00:04:49.146 free 0x200000600000 8388608 00:04:49.146 unregister 0x200000400000 10485760 PASSED 00:04:49.146 passed 00:04:49.146 00:04:49.146 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.146 suites 1 1 n/a 0 0 00:04:49.146 tests 1 1 1 0 0 00:04:49.146 asserts 15 15 15 0 n/a 00:04:49.146 00:04:49.146 Elapsed time = 0.005 seconds 00:04:49.146 00:04:49.146 real 0m0.051s 00:04:49.146 user 0m0.015s 00:04:49.146 sys 0m0.036s 00:04:49.146 22:03:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.146 22:03:44 -- common/autotest_common.sh@10 -- # set +x 00:04:49.146 ************************************ 00:04:49.146 END TEST env_mem_callbacks 00:04:49.146 ************************************ 00:04:49.146 00:04:49.146 real 0m5.915s 00:04:49.146 user 0m4.159s 00:04:49.146 sys 0m0.837s 00:04:49.146 22:03:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.146 22:03:44 -- common/autotest_common.sh@10 -- # set +x 00:04:49.146 ************************************ 00:04:49.146 END TEST env 00:04:49.146 ************************************ 00:04:49.146 22:03:44 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.146 22:03:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.146 22:03:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.146 22:03:44 -- common/autotest_common.sh@10 -- # set +x 00:04:49.146 ************************************ 00:04:49.146 START TEST rpc 00:04:49.146 ************************************ 00:04:49.146 22:03:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:49.406 * Looking for test storage... 00:04:49.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:49.406 22:03:44 -- rpc/rpc.sh@65 -- # spdk_pid=3371063 00:04:49.406 22:03:44 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.406 22:03:44 -- rpc/rpc.sh@67 -- # waitforlisten 3371063 00:04:49.406 22:03:44 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:49.406 22:03:44 -- common/autotest_common.sh@819 -- # '[' -z 3371063 ']' 00:04:49.406 22:03:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.406 22:03:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:49.406 22:03:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
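For context, the rpc suite above starts spdk_tgt with only the bdev tracepoint group enabled and then blocks in waitforlisten until the RPC socket answers. A rough hand-run equivalent, assuming SPDK_DIR points at the same checkout and the default /var/tmp/spdk.sock socket, looks like:

    $SPDK_DIR/build/bin/spdk_tgt -e bdev &
    # poll the UNIX-domain RPC socket until the target responds
    until $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
    done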
00:04:49.406 22:03:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:49.406 22:03:44 -- common/autotest_common.sh@10 -- # set +x 00:04:49.406 [2024-07-24 22:03:44.327698] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:04:49.406 [2024-07-24 22:03:44.327748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3371063 ] 00:04:49.406 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.406 [2024-07-24 22:03:44.383205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.406 [2024-07-24 22:03:44.422721] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:49.406 [2024-07-24 22:03:44.422830] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:49.406 [2024-07-24 22:03:44.422839] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3371063' to capture a snapshot of events at runtime. 00:04:49.406 [2024-07-24 22:03:44.422847] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3371063 for offline analysis/debug. 00:04:49.406 [2024-07-24 22:03:44.422870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.052 22:03:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:50.052 22:03:45 -- common/autotest_common.sh@852 -- # return 0 00:04:50.052 22:03:45 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.052 22:03:45 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:50.052 22:03:45 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:50.052 22:03:45 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:50.052 22:03:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.052 22:03:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.052 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.052 ************************************ 00:04:50.052 START TEST rpc_integrity 00:04:50.052 ************************************ 00:04:50.052 22:03:45 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:50.052 22:03:45 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.052 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.052 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.052 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.052 22:03:45 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.052 22:03:45 -- rpc/rpc.sh@13 -- # jq length 00:04:50.052 22:03:45 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.052 22:03:45 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.052 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.052 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.052 22:03:45 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:04:50.052 22:03:45 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:50.052 22:03:45 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.052 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.052 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.312 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.312 22:03:45 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.312 { 00:04:50.312 "name": "Malloc0", 00:04:50.312 "aliases": [ 00:04:50.312 "8405548b-7117-418e-94da-569e0dd3c0e4" 00:04:50.312 ], 00:04:50.312 "product_name": "Malloc disk", 00:04:50.312 "block_size": 512, 00:04:50.312 "num_blocks": 16384, 00:04:50.312 "uuid": "8405548b-7117-418e-94da-569e0dd3c0e4", 00:04:50.312 "assigned_rate_limits": { 00:04:50.312 "rw_ios_per_sec": 0, 00:04:50.312 "rw_mbytes_per_sec": 0, 00:04:50.312 "r_mbytes_per_sec": 0, 00:04:50.312 "w_mbytes_per_sec": 0 00:04:50.312 }, 00:04:50.312 "claimed": false, 00:04:50.312 "zoned": false, 00:04:50.312 "supported_io_types": { 00:04:50.312 "read": true, 00:04:50.312 "write": true, 00:04:50.312 "unmap": true, 00:04:50.312 "write_zeroes": true, 00:04:50.312 "flush": true, 00:04:50.312 "reset": true, 00:04:50.312 "compare": false, 00:04:50.312 "compare_and_write": false, 00:04:50.312 "abort": true, 00:04:50.312 "nvme_admin": false, 00:04:50.312 "nvme_io": false 00:04:50.312 }, 00:04:50.312 "memory_domains": [ 00:04:50.312 { 00:04:50.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.312 "dma_device_type": 2 00:04:50.312 } 00:04:50.312 ], 00:04:50.312 "driver_specific": {} 00:04:50.312 } 00:04:50.312 ]' 00:04:50.312 22:03:45 -- rpc/rpc.sh@17 -- # jq length 00:04:50.312 22:03:45 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.313 22:03:45 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:50.313 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.313 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.313 [2024-07-24 22:03:45.238414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:50.313 [2024-07-24 22:03:45.238445] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.313 [2024-07-24 22:03:45.238456] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21ade60 00:04:50.313 [2024-07-24 22:03:45.238463] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.313 [2024-07-24 22:03:45.239469] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.313 [2024-07-24 22:03:45.239489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.313 Passthru0 00:04:50.313 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.313 22:03:45 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.313 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.313 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.313 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.313 22:03:45 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.313 { 00:04:50.313 "name": "Malloc0", 00:04:50.313 "aliases": [ 00:04:50.313 "8405548b-7117-418e-94da-569e0dd3c0e4" 00:04:50.313 ], 00:04:50.313 "product_name": "Malloc disk", 00:04:50.313 "block_size": 512, 00:04:50.313 "num_blocks": 16384, 00:04:50.313 "uuid": "8405548b-7117-418e-94da-569e0dd3c0e4", 00:04:50.313 "assigned_rate_limits": { 00:04:50.313 "rw_ios_per_sec": 0, 00:04:50.313 "rw_mbytes_per_sec": 0, 00:04:50.313 
"r_mbytes_per_sec": 0, 00:04:50.313 "w_mbytes_per_sec": 0 00:04:50.313 }, 00:04:50.313 "claimed": true, 00:04:50.313 "claim_type": "exclusive_write", 00:04:50.313 "zoned": false, 00:04:50.313 "supported_io_types": { 00:04:50.313 "read": true, 00:04:50.313 "write": true, 00:04:50.313 "unmap": true, 00:04:50.313 "write_zeroes": true, 00:04:50.313 "flush": true, 00:04:50.313 "reset": true, 00:04:50.313 "compare": false, 00:04:50.313 "compare_and_write": false, 00:04:50.313 "abort": true, 00:04:50.313 "nvme_admin": false, 00:04:50.313 "nvme_io": false 00:04:50.313 }, 00:04:50.313 "memory_domains": [ 00:04:50.313 { 00:04:50.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.313 "dma_device_type": 2 00:04:50.313 } 00:04:50.313 ], 00:04:50.313 "driver_specific": {} 00:04:50.313 }, 00:04:50.313 { 00:04:50.313 "name": "Passthru0", 00:04:50.313 "aliases": [ 00:04:50.313 "720197e9-3d1f-5433-b2b4-d11284d0def8" 00:04:50.313 ], 00:04:50.313 "product_name": "passthru", 00:04:50.313 "block_size": 512, 00:04:50.313 "num_blocks": 16384, 00:04:50.313 "uuid": "720197e9-3d1f-5433-b2b4-d11284d0def8", 00:04:50.313 "assigned_rate_limits": { 00:04:50.313 "rw_ios_per_sec": 0, 00:04:50.313 "rw_mbytes_per_sec": 0, 00:04:50.313 "r_mbytes_per_sec": 0, 00:04:50.313 "w_mbytes_per_sec": 0 00:04:50.313 }, 00:04:50.313 "claimed": false, 00:04:50.313 "zoned": false, 00:04:50.313 "supported_io_types": { 00:04:50.313 "read": true, 00:04:50.313 "write": true, 00:04:50.313 "unmap": true, 00:04:50.313 "write_zeroes": true, 00:04:50.313 "flush": true, 00:04:50.313 "reset": true, 00:04:50.313 "compare": false, 00:04:50.313 "compare_and_write": false, 00:04:50.313 "abort": true, 00:04:50.313 "nvme_admin": false, 00:04:50.313 "nvme_io": false 00:04:50.313 }, 00:04:50.313 "memory_domains": [ 00:04:50.313 { 00:04:50.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.313 "dma_device_type": 2 00:04:50.313 } 00:04:50.313 ], 00:04:50.313 "driver_specific": { 00:04:50.313 "passthru": { 00:04:50.313 "name": "Passthru0", 00:04:50.313 "base_bdev_name": "Malloc0" 00:04:50.313 } 00:04:50.313 } 00:04:50.313 } 00:04:50.313 ]' 00:04:50.313 22:03:45 -- rpc/rpc.sh@21 -- # jq length 00:04:50.313 22:03:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:50.313 22:03:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:50.313 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.313 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.313 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.313 22:03:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:50.313 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.313 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.313 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.313 22:03:45 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:50.313 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.313 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.313 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.313 22:03:45 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:50.313 22:03:45 -- rpc/rpc.sh@26 -- # jq length 00:04:50.313 22:03:45 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:50.313 00:04:50.313 real 0m0.244s 00:04:50.313 user 0m0.159s 00:04:50.313 sys 0m0.031s 00:04:50.313 22:03:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.313 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.313 ************************************ 
00:04:50.313 END TEST rpc_integrity 00:04:50.313 ************************************ 00:04:50.313 22:03:45 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:50.313 22:03:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.313 22:03:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.313 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.313 ************************************ 00:04:50.313 START TEST rpc_plugins 00:04:50.313 ************************************ 00:04:50.313 22:03:45 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:50.313 22:03:45 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:50.313 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.313 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.313 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.313 22:03:45 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:50.313 22:03:45 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:50.313 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.313 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.313 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.313 22:03:45 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:50.313 { 00:04:50.313 "name": "Malloc1", 00:04:50.313 "aliases": [ 00:04:50.313 "642490e4-f88a-4bf3-a65f-6052d9a44786" 00:04:50.313 ], 00:04:50.313 "product_name": "Malloc disk", 00:04:50.313 "block_size": 4096, 00:04:50.313 "num_blocks": 256, 00:04:50.313 "uuid": "642490e4-f88a-4bf3-a65f-6052d9a44786", 00:04:50.313 "assigned_rate_limits": { 00:04:50.313 "rw_ios_per_sec": 0, 00:04:50.313 "rw_mbytes_per_sec": 0, 00:04:50.313 "r_mbytes_per_sec": 0, 00:04:50.313 "w_mbytes_per_sec": 0 00:04:50.313 }, 00:04:50.313 "claimed": false, 00:04:50.313 "zoned": false, 00:04:50.313 "supported_io_types": { 00:04:50.313 "read": true, 00:04:50.313 "write": true, 00:04:50.313 "unmap": true, 00:04:50.313 "write_zeroes": true, 00:04:50.313 "flush": true, 00:04:50.313 "reset": true, 00:04:50.313 "compare": false, 00:04:50.313 "compare_and_write": false, 00:04:50.313 "abort": true, 00:04:50.313 "nvme_admin": false, 00:04:50.313 "nvme_io": false 00:04:50.313 }, 00:04:50.313 "memory_domains": [ 00:04:50.313 { 00:04:50.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.313 "dma_device_type": 2 00:04:50.313 } 00:04:50.313 ], 00:04:50.313 "driver_specific": {} 00:04:50.313 } 00:04:50.313 ]' 00:04:50.313 22:03:45 -- rpc/rpc.sh@32 -- # jq length 00:04:50.573 22:03:45 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:50.573 22:03:45 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:50.573 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.573 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.573 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.573 22:03:45 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:50.573 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.573 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.573 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.573 22:03:45 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:50.573 22:03:45 -- rpc/rpc.sh@36 -- # jq length 00:04:50.573 22:03:45 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:50.573 00:04:50.573 real 0m0.127s 00:04:50.573 user 0m0.082s 00:04:50.573 sys 0m0.017s 00:04:50.573 22:03:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.573 22:03:45 -- 
common/autotest_common.sh@10 -- # set +x 00:04:50.573 ************************************ 00:04:50.573 END TEST rpc_plugins 00:04:50.573 ************************************ 00:04:50.573 22:03:45 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:50.573 22:03:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.573 22:03:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.573 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.573 ************************************ 00:04:50.573 START TEST rpc_trace_cmd_test 00:04:50.573 ************************************ 00:04:50.573 22:03:45 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:50.573 22:03:45 -- rpc/rpc.sh@40 -- # local info 00:04:50.573 22:03:45 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:50.573 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.573 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.573 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.573 22:03:45 -- rpc/rpc.sh@42 -- # info='{ 00:04:50.573 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3371063", 00:04:50.573 "tpoint_group_mask": "0x8", 00:04:50.573 "iscsi_conn": { 00:04:50.573 "mask": "0x2", 00:04:50.573 "tpoint_mask": "0x0" 00:04:50.573 }, 00:04:50.573 "scsi": { 00:04:50.573 "mask": "0x4", 00:04:50.573 "tpoint_mask": "0x0" 00:04:50.573 }, 00:04:50.573 "bdev": { 00:04:50.573 "mask": "0x8", 00:04:50.573 "tpoint_mask": "0xffffffffffffffff" 00:04:50.573 }, 00:04:50.573 "nvmf_rdma": { 00:04:50.573 "mask": "0x10", 00:04:50.573 "tpoint_mask": "0x0" 00:04:50.573 }, 00:04:50.573 "nvmf_tcp": { 00:04:50.573 "mask": "0x20", 00:04:50.573 "tpoint_mask": "0x0" 00:04:50.573 }, 00:04:50.573 "ftl": { 00:04:50.573 "mask": "0x40", 00:04:50.573 "tpoint_mask": "0x0" 00:04:50.573 }, 00:04:50.573 "blobfs": { 00:04:50.573 "mask": "0x80", 00:04:50.573 "tpoint_mask": "0x0" 00:04:50.573 }, 00:04:50.573 "dsa": { 00:04:50.573 "mask": "0x200", 00:04:50.573 "tpoint_mask": "0x0" 00:04:50.573 }, 00:04:50.573 "thread": { 00:04:50.573 "mask": "0x400", 00:04:50.573 "tpoint_mask": "0x0" 00:04:50.573 }, 00:04:50.573 "nvme_pcie": { 00:04:50.573 "mask": "0x800", 00:04:50.573 "tpoint_mask": "0x0" 00:04:50.573 }, 00:04:50.573 "iaa": { 00:04:50.573 "mask": "0x1000", 00:04:50.573 "tpoint_mask": "0x0" 00:04:50.573 }, 00:04:50.573 "nvme_tcp": { 00:04:50.573 "mask": "0x2000", 00:04:50.573 "tpoint_mask": "0x0" 00:04:50.573 }, 00:04:50.573 "bdev_nvme": { 00:04:50.573 "mask": "0x4000", 00:04:50.573 "tpoint_mask": "0x0" 00:04:50.573 } 00:04:50.573 }' 00:04:50.573 22:03:45 -- rpc/rpc.sh@43 -- # jq length 00:04:50.573 22:03:45 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:50.573 22:03:45 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:50.573 22:03:45 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:50.573 22:03:45 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:50.573 22:03:45 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:50.574 22:03:45 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:50.834 22:03:45 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:50.834 22:03:45 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:50.834 22:03:45 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:50.834 00:04:50.834 real 0m0.187s 00:04:50.834 user 0m0.159s 00:04:50.834 sys 0m0.020s 00:04:50.834 22:03:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.834 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.834 ************************************ 
00:04:50.834 END TEST rpc_trace_cmd_test 00:04:50.834 ************************************ 00:04:50.834 22:03:45 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:50.834 22:03:45 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:50.834 22:03:45 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:50.834 22:03:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.834 22:03:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.834 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.834 ************************************ 00:04:50.834 START TEST rpc_daemon_integrity 00:04:50.834 ************************************ 00:04:50.834 22:03:45 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:50.834 22:03:45 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:50.834 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.834 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.834 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.834 22:03:45 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:50.834 22:03:45 -- rpc/rpc.sh@13 -- # jq length 00:04:50.834 22:03:45 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:50.834 22:03:45 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:50.834 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.834 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.834 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.834 22:03:45 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:50.834 22:03:45 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:50.834 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.834 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.834 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.834 22:03:45 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:50.834 { 00:04:50.834 "name": "Malloc2", 00:04:50.834 "aliases": [ 00:04:50.834 "24b19d23-aa99-4657-8c75-be0f831378bf" 00:04:50.834 ], 00:04:50.834 "product_name": "Malloc disk", 00:04:50.834 "block_size": 512, 00:04:50.834 "num_blocks": 16384, 00:04:50.834 "uuid": "24b19d23-aa99-4657-8c75-be0f831378bf", 00:04:50.834 "assigned_rate_limits": { 00:04:50.834 "rw_ios_per_sec": 0, 00:04:50.834 "rw_mbytes_per_sec": 0, 00:04:50.834 "r_mbytes_per_sec": 0, 00:04:50.834 "w_mbytes_per_sec": 0 00:04:50.834 }, 00:04:50.834 "claimed": false, 00:04:50.834 "zoned": false, 00:04:50.834 "supported_io_types": { 00:04:50.834 "read": true, 00:04:50.834 "write": true, 00:04:50.834 "unmap": true, 00:04:50.834 "write_zeroes": true, 00:04:50.834 "flush": true, 00:04:50.834 "reset": true, 00:04:50.834 "compare": false, 00:04:50.834 "compare_and_write": false, 00:04:50.834 "abort": true, 00:04:50.834 "nvme_admin": false, 00:04:50.834 "nvme_io": false 00:04:50.834 }, 00:04:50.834 "memory_domains": [ 00:04:50.834 { 00:04:50.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.834 "dma_device_type": 2 00:04:50.834 } 00:04:50.834 ], 00:04:50.834 "driver_specific": {} 00:04:50.834 } 00:04:50.834 ]' 00:04:50.834 22:03:45 -- rpc/rpc.sh@17 -- # jq length 00:04:50.834 22:03:45 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:50.834 22:03:45 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:50.834 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.834 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.834 [2024-07-24 22:03:45.924280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:50.834 [2024-07-24 
22:03:45.924307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:50.834 [2024-07-24 22:03:45.924322] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21b1540 00:04:50.834 [2024-07-24 22:03:45.924328] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:50.834 [2024-07-24 22:03:45.925250] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:50.834 [2024-07-24 22:03:45.925273] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:50.834 Passthru0 00:04:50.834 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.834 22:03:45 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:50.834 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.834 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:50.834 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.834 22:03:45 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:50.834 { 00:04:50.834 "name": "Malloc2", 00:04:50.834 "aliases": [ 00:04:50.834 "24b19d23-aa99-4657-8c75-be0f831378bf" 00:04:50.834 ], 00:04:50.834 "product_name": "Malloc disk", 00:04:50.834 "block_size": 512, 00:04:50.834 "num_blocks": 16384, 00:04:50.834 "uuid": "24b19d23-aa99-4657-8c75-be0f831378bf", 00:04:50.834 "assigned_rate_limits": { 00:04:50.834 "rw_ios_per_sec": 0, 00:04:50.834 "rw_mbytes_per_sec": 0, 00:04:50.834 "r_mbytes_per_sec": 0, 00:04:50.834 "w_mbytes_per_sec": 0 00:04:50.834 }, 00:04:50.834 "claimed": true, 00:04:50.834 "claim_type": "exclusive_write", 00:04:50.834 "zoned": false, 00:04:50.834 "supported_io_types": { 00:04:50.834 "read": true, 00:04:50.834 "write": true, 00:04:50.834 "unmap": true, 00:04:50.834 "write_zeroes": true, 00:04:50.834 "flush": true, 00:04:50.834 "reset": true, 00:04:50.834 "compare": false, 00:04:50.834 "compare_and_write": false, 00:04:50.834 "abort": true, 00:04:50.834 "nvme_admin": false, 00:04:50.834 "nvme_io": false 00:04:50.834 }, 00:04:50.834 "memory_domains": [ 00:04:50.834 { 00:04:50.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.834 "dma_device_type": 2 00:04:50.834 } 00:04:50.834 ], 00:04:50.834 "driver_specific": {} 00:04:50.834 }, 00:04:50.834 { 00:04:50.834 "name": "Passthru0", 00:04:50.834 "aliases": [ 00:04:50.834 "39397ae0-3d2e-591b-bb02-2be086b29387" 00:04:50.834 ], 00:04:50.834 "product_name": "passthru", 00:04:50.834 "block_size": 512, 00:04:50.834 "num_blocks": 16384, 00:04:50.834 "uuid": "39397ae0-3d2e-591b-bb02-2be086b29387", 00:04:50.834 "assigned_rate_limits": { 00:04:50.834 "rw_ios_per_sec": 0, 00:04:50.834 "rw_mbytes_per_sec": 0, 00:04:50.834 "r_mbytes_per_sec": 0, 00:04:50.834 "w_mbytes_per_sec": 0 00:04:50.834 }, 00:04:50.834 "claimed": false, 00:04:50.834 "zoned": false, 00:04:50.834 "supported_io_types": { 00:04:50.834 "read": true, 00:04:50.834 "write": true, 00:04:50.834 "unmap": true, 00:04:50.834 "write_zeroes": true, 00:04:50.834 "flush": true, 00:04:50.834 "reset": true, 00:04:50.834 "compare": false, 00:04:50.834 "compare_and_write": false, 00:04:50.834 "abort": true, 00:04:50.834 "nvme_admin": false, 00:04:50.834 "nvme_io": false 00:04:50.834 }, 00:04:50.834 "memory_domains": [ 00:04:50.834 { 00:04:50.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:50.834 "dma_device_type": 2 00:04:50.834 } 00:04:50.834 ], 00:04:50.834 "driver_specific": { 00:04:50.834 "passthru": { 00:04:50.834 "name": "Passthru0", 00:04:50.834 "base_bdev_name": "Malloc2" 00:04:50.834 } 00:04:50.834 } 00:04:50.834 } 
00:04:50.834 ]' 00:04:50.834 22:03:45 -- rpc/rpc.sh@21 -- # jq length 00:04:51.094 22:03:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.094 22:03:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.094 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:51.094 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:51.094 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:51.094 22:03:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:51.094 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:51.094 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:51.094 22:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:51.094 22:03:45 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.094 22:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:51.094 22:03:45 -- common/autotest_common.sh@10 -- # set +x 00:04:51.094 22:03:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:51.094 22:03:46 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.094 22:03:46 -- rpc/rpc.sh@26 -- # jq length 00:04:51.094 22:03:46 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.094 00:04:51.094 real 0m0.248s 00:04:51.094 user 0m0.166s 00:04:51.094 sys 0m0.028s 00:04:51.094 22:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.094 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:51.094 ************************************ 00:04:51.094 END TEST rpc_daemon_integrity 00:04:51.094 ************************************ 00:04:51.094 22:03:46 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:51.094 22:03:46 -- rpc/rpc.sh@84 -- # killprocess 3371063 00:04:51.094 22:03:46 -- common/autotest_common.sh@926 -- # '[' -z 3371063 ']' 00:04:51.094 22:03:46 -- common/autotest_common.sh@930 -- # kill -0 3371063 00:04:51.094 22:03:46 -- common/autotest_common.sh@931 -- # uname 00:04:51.094 22:03:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:51.094 22:03:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3371063 00:04:51.094 22:03:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:51.094 22:03:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:51.094 22:03:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3371063' 00:04:51.094 killing process with pid 3371063 00:04:51.094 22:03:46 -- common/autotest_common.sh@945 -- # kill 3371063 00:04:51.094 22:03:46 -- common/autotest_common.sh@950 -- # wait 3371063 00:04:51.353 00:04:51.353 real 0m2.214s 00:04:51.353 user 0m2.830s 00:04:51.353 sys 0m0.571s 00:04:51.353 22:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.353 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:51.353 ************************************ 00:04:51.353 END TEST rpc 00:04:51.353 ************************************ 00:04:51.353 22:03:46 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:51.353 22:03:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.353 22:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.353 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:51.353 ************************************ 00:04:51.353 START TEST rpc_client 00:04:51.353 ************************************ 00:04:51.353 22:03:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
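The rpc_integrity and rpc_daemon_integrity passes above reduce to a create/claim/delete cycle over the RPC socket; the same sequence can be replayed by hand with rpc.py (bdev names as in the log; SPDK_DIR and the default socket path are assumptions):

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC bdev_malloc_create 8 512                      # prints the new bdev name (Malloc0 above)
    $RPC bdev_passthru_create -b Malloc0 -p Passthru0  # claims Malloc0
    $RPC bdev_get_bdevs | jq length                    # 2: Malloc0 + Passthru0
    $RPC bdev_passthru_delete Passthru0
    $RPC bdev_malloc_delete Malloc0
    $RPC bdev_get_bdevs | jq length                    # back to 0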
00:04:51.612 * Looking for test storage... 00:04:51.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:51.612 22:03:46 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:51.612 OK 00:04:51.612 22:03:46 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:51.612 00:04:51.612 real 0m0.106s 00:04:51.612 user 0m0.063s 00:04:51.612 sys 0m0.051s 00:04:51.612 22:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.612 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:51.612 ************************************ 00:04:51.613 END TEST rpc_client 00:04:51.613 ************************************ 00:04:51.613 22:03:46 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.613 22:03:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.613 22:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.613 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:51.613 ************************************ 00:04:51.613 START TEST json_config 00:04:51.613 ************************************ 00:04:51.613 22:03:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:51.613 22:03:46 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.613 22:03:46 -- nvmf/common.sh@7 -- # uname -s 00:04:51.613 22:03:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.613 22:03:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.613 22:03:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.613 22:03:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.613 22:03:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.613 22:03:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.613 22:03:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.613 22:03:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.613 22:03:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.613 22:03:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.613 22:03:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:51.613 22:03:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:51.613 22:03:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.613 22:03:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.613 22:03:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.613 22:03:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:51.613 22:03:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.613 22:03:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.613 22:03:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.613 22:03:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.613 22:03:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.613 22:03:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.613 22:03:46 -- paths/export.sh@5 -- # export PATH 00:04:51.613 22:03:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.613 22:03:46 -- nvmf/common.sh@46 -- # : 0 00:04:51.613 22:03:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:51.613 22:03:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:51.613 22:03:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:51.613 22:03:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.613 22:03:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.613 22:03:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:51.613 22:03:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:51.613 22:03:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:51.613 22:03:46 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:51.613 22:03:46 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:51.613 22:03:46 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:51.613 22:03:46 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:51.613 22:03:46 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:51.613 22:03:46 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:51.613 22:03:46 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:51.613 22:03:46 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:51.613 22:03:46 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:51.613 22:03:46 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:51.613 22:03:46 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:51.613 22:03:46 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:51.613 22:03:46 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:51.613 22:03:46 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.613 22:03:46 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:51.613 INFO: JSON configuration test init 00:04:51.613 22:03:46 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:51.613 22:03:46 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:51.613 22:03:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:51.613 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:51.613 22:03:46 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:51.613 22:03:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:51.613 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:51.613 22:03:46 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:51.613 22:03:46 -- json_config/json_config.sh@98 -- # local app=target 00:04:51.613 22:03:46 -- json_config/json_config.sh@99 -- # shift 00:04:51.613 22:03:46 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:51.613 22:03:46 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:51.613 22:03:46 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:51.613 22:03:46 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:51.613 22:03:46 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:51.613 22:03:46 -- json_config/json_config.sh@111 -- # app_pid[$app]=3371735 00:04:51.613 22:03:46 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:51.613 Waiting for target to run... 00:04:51.613 22:03:46 -- json_config/json_config.sh@114 -- # waitforlisten 3371735 /var/tmp/spdk_tgt.sock 00:04:51.613 22:03:46 -- common/autotest_common.sh@819 -- # '[' -z 3371735 ']' 00:04:51.613 22:03:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.613 22:03:46 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:51.613 22:03:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:51.613 22:03:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.613 22:03:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:51.613 22:03:46 -- common/autotest_common.sh@10 -- # set +x 00:04:51.872 [2024-07-24 22:03:46.750427] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:04:51.872 [2024-07-24 22:03:46.750479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3371735 ] 00:04:51.872 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.130 [2024-07-24 22:03:47.012864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.130 [2024-07-24 22:03:47.035497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.130 [2024-07-24 22:03:47.035599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.698 22:03:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:52.698 22:03:47 -- common/autotest_common.sh@852 -- # return 0 00:04:52.698 22:03:47 -- json_config/json_config.sh@115 -- # echo '' 00:04:52.698 00:04:52.698 22:03:47 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:52.698 22:03:47 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:52.698 22:03:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:52.698 22:03:47 -- common/autotest_common.sh@10 -- # set +x 00:04:52.698 22:03:47 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:52.698 22:03:47 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:52.698 22:03:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:52.698 22:03:47 -- common/autotest_common.sh@10 -- # set +x 00:04:52.698 22:03:47 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:52.698 22:03:47 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:52.698 22:03:47 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:55.990 22:03:50 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:55.990 22:03:50 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:55.990 22:03:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:55.990 22:03:50 -- common/autotest_common.sh@10 -- # set +x 00:04:55.990 22:03:50 -- json_config/json_config.sh@48 -- # local ret=0 00:04:55.990 22:03:50 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:55.990 22:03:50 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:55.990 22:03:50 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:55.990 22:03:50 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:55.990 22:03:50 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:55.990 22:03:50 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:55.990 22:03:50 -- json_config/json_config.sh@51 -- # local get_types 00:04:55.990 22:03:50 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:55.990 22:03:50 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:55.990 22:03:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:55.990 22:03:50 -- common/autotest_common.sh@10 -- # set +x 00:04:55.990 22:03:50 -- json_config/json_config.sh@58 -- # return 0 00:04:55.990 22:03:50 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:55.990 22:03:50 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:55.990 22:03:50 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:55.990 22:03:50 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:55.990 22:03:50 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:55.990 22:03:50 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:55.990 22:03:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:55.990 22:03:50 -- common/autotest_common.sh@10 -- # set +x 00:04:55.990 22:03:50 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:55.990 22:03:50 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:55.990 22:03:50 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:55.990 22:03:50 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:55.990 22:03:50 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:55.990 MallocForNvmf0 00:04:55.990 22:03:51 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:55.990 22:03:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:56.250 MallocForNvmf1 00:04:56.250 22:03:51 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:56.250 22:03:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:56.250 [2024-07-24 22:03:51.337126] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:56.250 22:03:51 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:56.250 22:03:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:56.509 22:03:51 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:56.509 22:03:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:56.768 22:03:51 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:56.768 22:03:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:56.768 22:03:51 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:56.768 22:03:51 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:57.027 [2024-07-24 22:03:52.019252] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
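The NVMf subsystem that json_config builds here through tgt_rpc corresponds to the rpc.py calls below, all of which appear verbatim in the trace; only the RPC shell variable wrapping the socket path is added for brevity:

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420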
00:04:57.027 22:03:52 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:57.027 22:03:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:57.027 22:03:52 -- common/autotest_common.sh@10 -- # set +x 00:04:57.027 22:03:52 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:57.027 22:03:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:57.027 22:03:52 -- common/autotest_common.sh@10 -- # set +x 00:04:57.027 22:03:52 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:57.027 22:03:52 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:57.027 22:03:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:57.287 MallocBdevForConfigChangeCheck 00:04:57.287 22:03:52 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:57.287 22:03:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:57.287 22:03:52 -- common/autotest_common.sh@10 -- # set +x 00:04:57.287 22:03:52 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:57.287 22:03:52 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.547 22:03:52 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:57.547 INFO: shutting down applications... 00:04:57.547 22:03:52 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:57.547 22:03:52 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:57.547 22:03:52 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:57.547 22:03:52 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:59.456 Calling clear_iscsi_subsystem 00:04:59.456 Calling clear_nvmf_subsystem 00:04:59.456 Calling clear_nbd_subsystem 00:04:59.456 Calling clear_ublk_subsystem 00:04:59.456 Calling clear_vhost_blk_subsystem 00:04:59.456 Calling clear_vhost_scsi_subsystem 00:04:59.456 Calling clear_scheduler_subsystem 00:04:59.456 Calling clear_bdev_subsystem 00:04:59.456 Calling clear_accel_subsystem 00:04:59.456 Calling clear_vmd_subsystem 00:04:59.456 Calling clear_sock_subsystem 00:04:59.456 Calling clear_iobuf_subsystem 00:04:59.456 22:03:54 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:59.456 22:03:54 -- json_config/json_config.sh@396 -- # count=100 00:04:59.456 22:03:54 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:59.456 22:03:54 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:59.456 22:03:54 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:59.456 22:03:54 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:59.456 22:03:54 -- json_config/json_config.sh@398 -- # break 00:04:59.456 22:03:54 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:59.456 22:03:54 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:04:59.456 22:03:54 -- json_config/json_config.sh@120 -- # local app=target 00:04:59.456 22:03:54 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:59.456 22:03:54 -- json_config/json_config.sh@124 -- # [[ -n 3371735 ]] 00:04:59.456 22:03:54 -- json_config/json_config.sh@127 -- # kill -SIGINT 3371735 00:04:59.456 22:03:54 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:59.456 22:03:54 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:59.456 22:03:54 -- json_config/json_config.sh@130 -- # kill -0 3371735 00:04:59.456 22:03:54 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:00.028 22:03:54 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:00.028 22:03:54 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:00.028 22:03:54 -- json_config/json_config.sh@130 -- # kill -0 3371735 00:05:00.028 22:03:54 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:00.028 22:03:54 -- json_config/json_config.sh@132 -- # break 00:05:00.028 22:03:54 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:00.028 22:03:54 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:00.028 SPDK target shutdown done 00:05:00.028 22:03:54 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:00.028 INFO: relaunching applications... 00:05:00.028 22:03:54 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.028 22:03:54 -- json_config/json_config.sh@98 -- # local app=target 00:05:00.028 22:03:54 -- json_config/json_config.sh@99 -- # shift 00:05:00.028 22:03:54 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:00.028 22:03:54 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:00.028 22:03:54 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:00.028 22:03:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:00.028 22:03:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:00.028 22:03:54 -- json_config/json_config.sh@111 -- # app_pid[$app]=3373263 00:05:00.028 22:03:54 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:00.028 Waiting for target to run... 00:05:00.028 22:03:54 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.028 22:03:54 -- json_config/json_config.sh@114 -- # waitforlisten 3373263 /var/tmp/spdk_tgt.sock 00:05:00.028 22:03:54 -- common/autotest_common.sh@819 -- # '[' -z 3373263 ']' 00:05:00.028 22:03:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.028 22:03:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:00.028 22:03:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.028 22:03:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:00.028 22:03:54 -- common/autotest_common.sh@10 -- # set +x 00:05:00.028 [2024-07-24 22:03:55.021963] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:00.028 [2024-07-24 22:03:55.022024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3373263 ] 00:05:00.028 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.598 [2024-07-24 22:03:55.451676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.598 [2024-07-24 22:03:55.482454] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:00.598 [2024-07-24 22:03:55.482569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.893 [2024-07-24 22:03:58.466918] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.893 [2024-07-24 22:03:58.499246] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.153 22:03:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:04.153 22:03:59 -- common/autotest_common.sh@852 -- # return 0 00:05:04.153 22:03:59 -- json_config/json_config.sh@115 -- # echo '' 00:05:04.153 00:05:04.153 22:03:59 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:04.153 22:03:59 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:04.153 INFO: Checking if target configuration is the same... 00:05:04.153 22:03:59 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:04.153 22:03:59 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.153 22:03:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.153 + '[' 2 -ne 2 ']' 00:05:04.153 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:04.153 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:04.153 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:04.153 +++ basename /dev/fd/62 00:05:04.153 ++ mktemp /tmp/62.XXX 00:05:04.153 + tmp_file_1=/tmp/62.PzF 00:05:04.153 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.153 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:04.153 + tmp_file_2=/tmp/spdk_tgt_config.json.LyX 00:05:04.153 + ret=0 00:05:04.153 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:04.412 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:04.412 + diff -u /tmp/62.PzF /tmp/spdk_tgt_config.json.LyX 00:05:04.413 + echo 'INFO: JSON config files are the same' 00:05:04.413 INFO: JSON config files are the same 00:05:04.413 + rm /tmp/62.PzF /tmp/spdk_tgt_config.json.LyX 00:05:04.413 + exit 0 00:05:04.413 22:03:59 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:04.413 22:03:59 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:04.413 INFO: changing configuration and checking if this can be detected... 
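The "is the configuration the same" check above is just a normalized diff of two save_config dumps. Stripped of json_diff.sh's temp-file bookkeeping, it amounts to roughly the following; the output file names are illustrative, while the config_filter.py path and -method sort usage are taken from the trace:

    FILTER=$SPDK_DIR/test/json_config/config_filter.py
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $FILTER -method sort > /tmp/live.json
    $FILTER -method sort < $SPDK_DIR/spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'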
00:05:04.413 22:03:59 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:04.413 22:03:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:04.682 22:03:59 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:04.682 22:03:59 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.683 22:03:59 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.683 + '[' 2 -ne 2 ']' 00:05:04.683 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:04.683 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:04.683 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:04.683 +++ basename /dev/fd/62 00:05:04.683 ++ mktemp /tmp/62.XXX 00:05:04.683 + tmp_file_1=/tmp/62.qBb 00:05:04.683 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.683 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:04.683 + tmp_file_2=/tmp/spdk_tgt_config.json.89K 00:05:04.683 + ret=0 00:05:04.683 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:04.948 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:04.948 + diff -u /tmp/62.qBb /tmp/spdk_tgt_config.json.89K 00:05:04.948 + ret=1 00:05:04.948 + echo '=== Start of file: /tmp/62.qBb ===' 00:05:04.948 + cat /tmp/62.qBb 00:05:04.948 + echo '=== End of file: /tmp/62.qBb ===' 00:05:04.948 + echo '' 00:05:04.948 + echo '=== Start of file: /tmp/spdk_tgt_config.json.89K ===' 00:05:04.948 + cat /tmp/spdk_tgt_config.json.89K 00:05:04.948 + echo '=== End of file: /tmp/spdk_tgt_config.json.89K ===' 00:05:04.948 + echo '' 00:05:04.948 + rm /tmp/62.qBb /tmp/spdk_tgt_config.json.89K 00:05:04.948 + exit 1 00:05:04.948 22:03:59 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:04.948 INFO: configuration change detected. 
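The drift check traced above reduces to: dump the live configuration over the RPC socket, normalize both JSON documents with the repository's sort filter, and diff them; a non-empty diff is what the test reports as "configuration change detected". A minimal sketch of that comparison, using the paths and socket name printed in the trace (the stdin-based filter invocation is an assumption, not a copy of json_diff.sh):

#!/usr/bin/env bash
# Hedged sketch of the save_config / sort / diff comparison shown above (not the actual json_diff.sh).
set -euo pipefail

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/spdk_tgt.sock
saved_json=$rootdir/spdk_tgt_config.json

live=$(mktemp /tmp/live.XXX)
ref=$(mktemp /tmp/ref.XXX)

# Dump the running target's configuration and sort both sides so key order cannot mask equality.
"$rootdir/scripts/rpc.py" -s "$sock" save_config \
  | "$rootdir/test/json_config/config_filter.py" -method sort > "$live"
"$rootdir/test/json_config/config_filter.py" -method sort < "$saved_json" > "$ref"

if diff -u "$ref" "$live"; then
  echo 'INFO: JSON config files are the same'
else
  echo 'INFO: configuration change detected.'
fi
rm -f "$live" "$ref"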
00:05:04.948 22:03:59 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:04.948 22:03:59 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:04.949 22:03:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:04.949 22:03:59 -- common/autotest_common.sh@10 -- # set +x 00:05:04.949 22:03:59 -- json_config/json_config.sh@360 -- # local ret=0 00:05:04.949 22:03:59 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:04.949 22:03:59 -- json_config/json_config.sh@370 -- # [[ -n 3373263 ]] 00:05:04.949 22:03:59 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:04.949 22:03:59 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:04.949 22:03:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:04.949 22:03:59 -- common/autotest_common.sh@10 -- # set +x 00:05:04.949 22:03:59 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:04.949 22:03:59 -- json_config/json_config.sh@246 -- # uname -s 00:05:04.949 22:03:59 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:04.949 22:03:59 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:04.949 22:03:59 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:04.949 22:03:59 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:04.949 22:03:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:04.949 22:03:59 -- common/autotest_common.sh@10 -- # set +x 00:05:04.949 22:04:00 -- json_config/json_config.sh@376 -- # killprocess 3373263 00:05:04.949 22:04:00 -- common/autotest_common.sh@926 -- # '[' -z 3373263 ']' 00:05:04.949 22:04:00 -- common/autotest_common.sh@930 -- # kill -0 3373263 00:05:04.949 22:04:00 -- common/autotest_common.sh@931 -- # uname 00:05:04.949 22:04:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:04.949 22:04:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3373263 00:05:04.949 22:04:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:04.949 22:04:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:04.949 22:04:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3373263' 00:05:04.949 killing process with pid 3373263 00:05:04.949 22:04:00 -- common/autotest_common.sh@945 -- # kill 3373263 00:05:04.949 22:04:00 -- common/autotest_common.sh@950 -- # wait 3373263 00:05:06.860 22:04:01 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.860 22:04:01 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:06.860 22:04:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:06.860 22:04:01 -- common/autotest_common.sh@10 -- # set +x 00:05:06.860 22:04:01 -- json_config/json_config.sh@381 -- # return 0 00:05:06.860 22:04:01 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:06.860 INFO: Success 00:05:06.860 00:05:06.860 real 0m14.953s 00:05:06.860 user 0m15.977s 00:05:06.860 sys 0m1.946s 00:05:06.860 22:04:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.860 22:04:01 -- common/autotest_common.sh@10 -- # set +x 00:05:06.860 ************************************ 00:05:06.860 END TEST json_config 00:05:06.860 ************************************ 00:05:06.860 22:04:01 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:06.860 22:04:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.860 22:04:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.860 22:04:01 -- common/autotest_common.sh@10 -- # set +x 00:05:06.860 ************************************ 00:05:06.860 START TEST json_config_extra_key 00:05:06.860 ************************************ 00:05:06.860 22:04:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.860 22:04:01 -- nvmf/common.sh@7 -- # uname -s 00:05:06.860 22:04:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.860 22:04:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.860 22:04:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.860 22:04:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.860 22:04:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.860 22:04:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.860 22:04:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.860 22:04:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.860 22:04:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.860 22:04:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.860 22:04:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:06.860 22:04:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:06.860 22:04:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.860 22:04:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.860 22:04:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.860 22:04:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.860 22:04:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.860 22:04:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.860 22:04:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.860 22:04:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.860 22:04:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.860 22:04:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.860 22:04:01 -- paths/export.sh@5 -- # export PATH 00:05:06.860 22:04:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.860 22:04:01 -- nvmf/common.sh@46 -- # : 0 00:05:06.860 22:04:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:06.860 22:04:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:06.860 22:04:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:06.860 22:04:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.860 22:04:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.860 22:04:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:06.860 22:04:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:06.860 22:04:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:06.860 INFO: launching applications... 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3374548 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:06.860 Waiting for target to run... 
00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3374548 /var/tmp/spdk_tgt.sock 00:05:06.860 22:04:01 -- common/autotest_common.sh@819 -- # '[' -z 3374548 ']' 00:05:06.860 22:04:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.860 22:04:01 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:06.860 22:04:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:06.860 22:04:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.860 22:04:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:06.860 22:04:01 -- common/autotest_common.sh@10 -- # set +x 00:05:06.860 [2024-07-24 22:04:01.737436] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:06.860 [2024-07-24 22:04:01.737489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3374548 ] 00:05:06.860 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.133 [2024-07-24 22:04:02.013746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.133 [2024-07-24 22:04:02.036524] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:07.133 [2024-07-24 22:04:02.036623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.424 22:04:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:07.424 22:04:02 -- common/autotest_common.sh@852 -- # return 0 00:05:07.424 22:04:02 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:07.424 00:05:07.424 22:04:02 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:07.424 INFO: shutting down applications... 
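The startup just traced (spdk_tgt launched with -r and --json, then a wait on the UNIX domain socket) follows the waitforlisten pattern from autotest_common.sh. The helper below is a simplified stand-in, not the real function; the retry budget and the use of rpc_get_methods as the readiness probe are assumptions.

#!/usr/bin/env bash
# Simplified stand-in for the start-and-wait pattern above; not the real waitforlisten helper.
set -euo pipefail

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/spdk_tgt.sock
config=$rootdir/test/json_config/extra_key.json

"$rootdir/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" --json "$config" &
tgt_pid=$!

# Poll until the target answers a trivial RPC on its socket (assumed 100 x 0.1s budget).
for _ in $(seq 1 100); do
  if "$rootdir/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
    echo "Waiting for target to run... done (pid $tgt_pid)"
    exit 0
  fi
  sleep 0.1
done
echo "target never came up" >&2
kill "$tgt_pid" 2>/dev/null || true
exit 1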
00:05:07.424 22:04:02 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:07.424 22:04:02 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:07.424 22:04:02 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:07.424 22:04:02 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3374548 ]] 00:05:07.424 22:04:02 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3374548 00:05:07.424 22:04:02 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:07.424 22:04:02 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:07.424 22:04:02 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3374548 00:05:07.424 22:04:02 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:07.995 22:04:03 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:07.995 22:04:03 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:07.995 22:04:03 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3374548 00:05:07.995 22:04:03 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:07.995 22:04:03 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:07.995 22:04:03 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:07.995 22:04:03 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:07.995 SPDK target shutdown done 00:05:07.995 22:04:03 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:07.995 Success 00:05:07.995 00:05:07.995 real 0m1.424s 00:05:07.995 user 0m1.184s 00:05:07.995 sys 0m0.359s 00:05:07.995 22:04:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.995 22:04:03 -- common/autotest_common.sh@10 -- # set +x 00:05:07.995 ************************************ 00:05:07.995 END TEST json_config_extra_key 00:05:07.995 ************************************ 00:05:07.996 22:04:03 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.996 22:04:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.996 22:04:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.996 22:04:03 -- common/autotest_common.sh@10 -- # set +x 00:05:07.996 ************************************ 00:05:07.996 START TEST alias_rpc 00:05:07.996 ************************************ 00:05:07.996 22:04:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.256 * Looking for test storage... 00:05:08.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:08.256 22:04:03 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:08.256 22:04:03 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3374837 00:05:08.256 22:04:03 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3374837 00:05:08.256 22:04:03 -- common/autotest_common.sh@819 -- # '[' -z 3374837 ']' 00:05:08.256 22:04:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.256 22:04:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:08.256 22:04:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
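The shutdown completed above follows a simple pattern visible in the trace: send SIGINT, then poll the pid with kill -0 for up to 30 half-second intervals before giving up. A hedged restatement of that loop, with the pid taken as an argument instead of the script's app_pid array:

#!/usr/bin/env bash
# Sketch of the SIGINT-then-poll shutdown loop traced above (json_config_extra_key.sh style).
set -u
pid=${1:?usage: $0 <target pid>}

kill -SIGINT "$pid" 2>/dev/null || true

for (( i = 0; i < 30; i++ )); do
  # kill -0 only checks that the process still exists; it delivers no signal.
  if ! kill -0 "$pid" 2>/dev/null; then
    echo 'SPDK target shutdown done'
    exit 0
  fi
  sleep 0.5
done

echo "target $pid still alive after 15s" >&2
exit 1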
00:05:08.256 22:04:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:08.256 22:04:03 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.256 22:04:03 -- common/autotest_common.sh@10 -- # set +x 00:05:08.256 [2024-07-24 22:04:03.185120] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:08.256 [2024-07-24 22:04:03.185175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3374837 ] 00:05:08.256 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.256 [2024-07-24 22:04:03.239032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.256 [2024-07-24 22:04:03.279237] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:08.256 [2024-07-24 22:04:03.279355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.196 22:04:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:09.196 22:04:03 -- common/autotest_common.sh@852 -- # return 0 00:05:09.196 22:04:03 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:09.196 22:04:04 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3374837 00:05:09.196 22:04:04 -- common/autotest_common.sh@926 -- # '[' -z 3374837 ']' 00:05:09.196 22:04:04 -- common/autotest_common.sh@930 -- # kill -0 3374837 00:05:09.196 22:04:04 -- common/autotest_common.sh@931 -- # uname 00:05:09.196 22:04:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:09.196 22:04:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3374837 00:05:09.196 22:04:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:09.196 22:04:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:09.196 22:04:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3374837' 00:05:09.196 killing process with pid 3374837 00:05:09.196 22:04:04 -- common/autotest_common.sh@945 -- # kill 3374837 00:05:09.196 22:04:04 -- common/autotest_common.sh@950 -- # wait 3374837 00:05:09.456 00:05:09.456 real 0m1.434s 00:05:09.456 user 0m1.570s 00:05:09.456 sys 0m0.367s 00:05:09.456 22:04:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.456 22:04:04 -- common/autotest_common.sh@10 -- # set +x 00:05:09.456 ************************************ 00:05:09.456 END TEST alias_rpc 00:05:09.456 ************************************ 00:05:09.456 22:04:04 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:09.456 22:04:04 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:09.456 22:04:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.456 22:04:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.456 22:04:04 -- common/autotest_common.sh@10 -- # set +x 00:05:09.456 ************************************ 00:05:09.456 START TEST spdkcli_tcp 00:05:09.456 ************************************ 00:05:09.456 22:04:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:09.717 * Looking for test storage... 
00:05:09.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:09.717 22:04:04 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:09.717 22:04:04 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:09.717 22:04:04 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:09.717 22:04:04 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:09.717 22:04:04 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:09.717 22:04:04 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:09.717 22:04:04 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:09.717 22:04:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:09.717 22:04:04 -- common/autotest_common.sh@10 -- # set +x 00:05:09.717 22:04:04 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3375119 00:05:09.717 22:04:04 -- spdkcli/tcp.sh@27 -- # waitforlisten 3375119 00:05:09.717 22:04:04 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:09.717 22:04:04 -- common/autotest_common.sh@819 -- # '[' -z 3375119 ']' 00:05:09.717 22:04:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.717 22:04:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:09.717 22:04:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.717 22:04:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:09.717 22:04:04 -- common/autotest_common.sh@10 -- # set +x 00:05:09.717 [2024-07-24 22:04:04.676531] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:09.717 [2024-07-24 22:04:04.676582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3375119 ] 00:05:09.717 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.717 [2024-07-24 22:04:04.730753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.717 [2024-07-24 22:04:04.769528] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:09.717 [2024-07-24 22:04:04.769703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.717 [2024-07-24 22:04:04.769705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.655 22:04:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:10.655 22:04:05 -- common/autotest_common.sh@852 -- # return 0 00:05:10.655 22:04:05 -- spdkcli/tcp.sh@31 -- # socat_pid=3375226 00:05:10.655 22:04:05 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:10.655 22:04:05 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:10.655 [ 00:05:10.655 "bdev_malloc_delete", 00:05:10.655 "bdev_malloc_create", 00:05:10.655 "bdev_null_resize", 00:05:10.655 "bdev_null_delete", 00:05:10.655 "bdev_null_create", 00:05:10.655 "bdev_nvme_cuse_unregister", 00:05:10.655 "bdev_nvme_cuse_register", 00:05:10.655 "bdev_opal_new_user", 00:05:10.655 "bdev_opal_set_lock_state", 00:05:10.655 "bdev_opal_delete", 00:05:10.655 "bdev_opal_get_info", 00:05:10.655 "bdev_opal_create", 00:05:10.655 "bdev_nvme_opal_revert", 00:05:10.655 "bdev_nvme_opal_init", 00:05:10.655 "bdev_nvme_send_cmd", 00:05:10.655 "bdev_nvme_get_path_iostat", 00:05:10.655 "bdev_nvme_get_mdns_discovery_info", 00:05:10.655 "bdev_nvme_stop_mdns_discovery", 00:05:10.655 "bdev_nvme_start_mdns_discovery", 00:05:10.655 "bdev_nvme_set_multipath_policy", 00:05:10.655 "bdev_nvme_set_preferred_path", 00:05:10.655 "bdev_nvme_get_io_paths", 00:05:10.655 "bdev_nvme_remove_error_injection", 00:05:10.655 "bdev_nvme_add_error_injection", 00:05:10.655 "bdev_nvme_get_discovery_info", 00:05:10.655 "bdev_nvme_stop_discovery", 00:05:10.655 "bdev_nvme_start_discovery", 00:05:10.655 "bdev_nvme_get_controller_health_info", 00:05:10.655 "bdev_nvme_disable_controller", 00:05:10.655 "bdev_nvme_enable_controller", 00:05:10.655 "bdev_nvme_reset_controller", 00:05:10.655 "bdev_nvme_get_transport_statistics", 00:05:10.655 "bdev_nvme_apply_firmware", 00:05:10.655 "bdev_nvme_detach_controller", 00:05:10.655 "bdev_nvme_get_controllers", 00:05:10.655 "bdev_nvme_attach_controller", 00:05:10.655 "bdev_nvme_set_hotplug", 00:05:10.655 "bdev_nvme_set_options", 00:05:10.655 "bdev_passthru_delete", 00:05:10.655 "bdev_passthru_create", 00:05:10.655 "bdev_lvol_grow_lvstore", 00:05:10.655 "bdev_lvol_get_lvols", 00:05:10.655 "bdev_lvol_get_lvstores", 00:05:10.655 "bdev_lvol_delete", 00:05:10.655 "bdev_lvol_set_read_only", 00:05:10.655 "bdev_lvol_resize", 00:05:10.655 "bdev_lvol_decouple_parent", 00:05:10.655 "bdev_lvol_inflate", 00:05:10.655 "bdev_lvol_rename", 00:05:10.655 "bdev_lvol_clone_bdev", 00:05:10.655 "bdev_lvol_clone", 00:05:10.655 "bdev_lvol_snapshot", 00:05:10.655 "bdev_lvol_create", 00:05:10.655 "bdev_lvol_delete_lvstore", 00:05:10.655 "bdev_lvol_rename_lvstore", 00:05:10.655 "bdev_lvol_create_lvstore", 00:05:10.655 "bdev_raid_set_options", 00:05:10.655 
"bdev_raid_remove_base_bdev", 00:05:10.655 "bdev_raid_add_base_bdev", 00:05:10.655 "bdev_raid_delete", 00:05:10.655 "bdev_raid_create", 00:05:10.655 "bdev_raid_get_bdevs", 00:05:10.655 "bdev_error_inject_error", 00:05:10.655 "bdev_error_delete", 00:05:10.655 "bdev_error_create", 00:05:10.655 "bdev_split_delete", 00:05:10.655 "bdev_split_create", 00:05:10.655 "bdev_delay_delete", 00:05:10.655 "bdev_delay_create", 00:05:10.655 "bdev_delay_update_latency", 00:05:10.655 "bdev_zone_block_delete", 00:05:10.655 "bdev_zone_block_create", 00:05:10.655 "blobfs_create", 00:05:10.655 "blobfs_detect", 00:05:10.655 "blobfs_set_cache_size", 00:05:10.655 "bdev_aio_delete", 00:05:10.655 "bdev_aio_rescan", 00:05:10.655 "bdev_aio_create", 00:05:10.655 "bdev_ftl_set_property", 00:05:10.655 "bdev_ftl_get_properties", 00:05:10.655 "bdev_ftl_get_stats", 00:05:10.655 "bdev_ftl_unmap", 00:05:10.655 "bdev_ftl_unload", 00:05:10.655 "bdev_ftl_delete", 00:05:10.655 "bdev_ftl_load", 00:05:10.655 "bdev_ftl_create", 00:05:10.655 "bdev_virtio_attach_controller", 00:05:10.655 "bdev_virtio_scsi_get_devices", 00:05:10.655 "bdev_virtio_detach_controller", 00:05:10.655 "bdev_virtio_blk_set_hotplug", 00:05:10.655 "bdev_iscsi_delete", 00:05:10.655 "bdev_iscsi_create", 00:05:10.655 "bdev_iscsi_set_options", 00:05:10.655 "accel_error_inject_error", 00:05:10.655 "ioat_scan_accel_module", 00:05:10.655 "dsa_scan_accel_module", 00:05:10.655 "iaa_scan_accel_module", 00:05:10.655 "vfu_virtio_create_scsi_endpoint", 00:05:10.655 "vfu_virtio_scsi_remove_target", 00:05:10.655 "vfu_virtio_scsi_add_target", 00:05:10.655 "vfu_virtio_create_blk_endpoint", 00:05:10.655 "vfu_virtio_delete_endpoint", 00:05:10.655 "iscsi_set_options", 00:05:10.655 "iscsi_get_auth_groups", 00:05:10.655 "iscsi_auth_group_remove_secret", 00:05:10.655 "iscsi_auth_group_add_secret", 00:05:10.655 "iscsi_delete_auth_group", 00:05:10.655 "iscsi_create_auth_group", 00:05:10.655 "iscsi_set_discovery_auth", 00:05:10.655 "iscsi_get_options", 00:05:10.655 "iscsi_target_node_request_logout", 00:05:10.655 "iscsi_target_node_set_redirect", 00:05:10.655 "iscsi_target_node_set_auth", 00:05:10.655 "iscsi_target_node_add_lun", 00:05:10.655 "iscsi_get_connections", 00:05:10.655 "iscsi_portal_group_set_auth", 00:05:10.655 "iscsi_start_portal_group", 00:05:10.655 "iscsi_delete_portal_group", 00:05:10.655 "iscsi_create_portal_group", 00:05:10.655 "iscsi_get_portal_groups", 00:05:10.655 "iscsi_delete_target_node", 00:05:10.655 "iscsi_target_node_remove_pg_ig_maps", 00:05:10.655 "iscsi_target_node_add_pg_ig_maps", 00:05:10.655 "iscsi_create_target_node", 00:05:10.655 "iscsi_get_target_nodes", 00:05:10.655 "iscsi_delete_initiator_group", 00:05:10.655 "iscsi_initiator_group_remove_initiators", 00:05:10.655 "iscsi_initiator_group_add_initiators", 00:05:10.655 "iscsi_create_initiator_group", 00:05:10.655 "iscsi_get_initiator_groups", 00:05:10.655 "nvmf_set_crdt", 00:05:10.655 "nvmf_set_config", 00:05:10.655 "nvmf_set_max_subsystems", 00:05:10.655 "nvmf_subsystem_get_listeners", 00:05:10.655 "nvmf_subsystem_get_qpairs", 00:05:10.655 "nvmf_subsystem_get_controllers", 00:05:10.655 "nvmf_get_stats", 00:05:10.655 "nvmf_get_transports", 00:05:10.655 "nvmf_create_transport", 00:05:10.655 "nvmf_get_targets", 00:05:10.655 "nvmf_delete_target", 00:05:10.655 "nvmf_create_target", 00:05:10.655 "nvmf_subsystem_allow_any_host", 00:05:10.655 "nvmf_subsystem_remove_host", 00:05:10.655 "nvmf_subsystem_add_host", 00:05:10.655 "nvmf_subsystem_remove_ns", 00:05:10.655 "nvmf_subsystem_add_ns", 00:05:10.655 
"nvmf_subsystem_listener_set_ana_state", 00:05:10.655 "nvmf_discovery_get_referrals", 00:05:10.655 "nvmf_discovery_remove_referral", 00:05:10.655 "nvmf_discovery_add_referral", 00:05:10.655 "nvmf_subsystem_remove_listener", 00:05:10.655 "nvmf_subsystem_add_listener", 00:05:10.655 "nvmf_delete_subsystem", 00:05:10.655 "nvmf_create_subsystem", 00:05:10.655 "nvmf_get_subsystems", 00:05:10.655 "env_dpdk_get_mem_stats", 00:05:10.655 "nbd_get_disks", 00:05:10.655 "nbd_stop_disk", 00:05:10.655 "nbd_start_disk", 00:05:10.655 "ublk_recover_disk", 00:05:10.655 "ublk_get_disks", 00:05:10.655 "ublk_stop_disk", 00:05:10.655 "ublk_start_disk", 00:05:10.655 "ublk_destroy_target", 00:05:10.655 "ublk_create_target", 00:05:10.655 "virtio_blk_create_transport", 00:05:10.655 "virtio_blk_get_transports", 00:05:10.655 "vhost_controller_set_coalescing", 00:05:10.655 "vhost_get_controllers", 00:05:10.655 "vhost_delete_controller", 00:05:10.655 "vhost_create_blk_controller", 00:05:10.655 "vhost_scsi_controller_remove_target", 00:05:10.656 "vhost_scsi_controller_add_target", 00:05:10.656 "vhost_start_scsi_controller", 00:05:10.656 "vhost_create_scsi_controller", 00:05:10.656 "thread_set_cpumask", 00:05:10.656 "framework_get_scheduler", 00:05:10.656 "framework_set_scheduler", 00:05:10.656 "framework_get_reactors", 00:05:10.656 "thread_get_io_channels", 00:05:10.656 "thread_get_pollers", 00:05:10.656 "thread_get_stats", 00:05:10.656 "framework_monitor_context_switch", 00:05:10.656 "spdk_kill_instance", 00:05:10.656 "log_enable_timestamps", 00:05:10.656 "log_get_flags", 00:05:10.656 "log_clear_flag", 00:05:10.656 "log_set_flag", 00:05:10.656 "log_get_level", 00:05:10.656 "log_set_level", 00:05:10.656 "log_get_print_level", 00:05:10.656 "log_set_print_level", 00:05:10.656 "framework_enable_cpumask_locks", 00:05:10.656 "framework_disable_cpumask_locks", 00:05:10.656 "framework_wait_init", 00:05:10.656 "framework_start_init", 00:05:10.656 "scsi_get_devices", 00:05:10.656 "bdev_get_histogram", 00:05:10.656 "bdev_enable_histogram", 00:05:10.656 "bdev_set_qos_limit", 00:05:10.656 "bdev_set_qd_sampling_period", 00:05:10.656 "bdev_get_bdevs", 00:05:10.656 "bdev_reset_iostat", 00:05:10.656 "bdev_get_iostat", 00:05:10.656 "bdev_examine", 00:05:10.656 "bdev_wait_for_examine", 00:05:10.656 "bdev_set_options", 00:05:10.656 "notify_get_notifications", 00:05:10.656 "notify_get_types", 00:05:10.656 "accel_get_stats", 00:05:10.656 "accel_set_options", 00:05:10.656 "accel_set_driver", 00:05:10.656 "accel_crypto_key_destroy", 00:05:10.656 "accel_crypto_keys_get", 00:05:10.656 "accel_crypto_key_create", 00:05:10.656 "accel_assign_opc", 00:05:10.656 "accel_get_module_info", 00:05:10.656 "accel_get_opc_assignments", 00:05:10.656 "vmd_rescan", 00:05:10.656 "vmd_remove_device", 00:05:10.656 "vmd_enable", 00:05:10.656 "sock_set_default_impl", 00:05:10.656 "sock_impl_set_options", 00:05:10.656 "sock_impl_get_options", 00:05:10.656 "iobuf_get_stats", 00:05:10.656 "iobuf_set_options", 00:05:10.656 "framework_get_pci_devices", 00:05:10.656 "framework_get_config", 00:05:10.656 "framework_get_subsystems", 00:05:10.656 "vfu_tgt_set_base_path", 00:05:10.656 "trace_get_info", 00:05:10.656 "trace_get_tpoint_group_mask", 00:05:10.656 "trace_disable_tpoint_group", 00:05:10.656 "trace_enable_tpoint_group", 00:05:10.656 "trace_clear_tpoint_mask", 00:05:10.656 "trace_set_tpoint_mask", 00:05:10.656 "spdk_get_version", 00:05:10.656 "rpc_get_methods" 00:05:10.656 ] 00:05:10.656 22:04:05 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:10.656 
22:04:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:10.656 22:04:05 -- common/autotest_common.sh@10 -- # set +x 00:05:10.656 22:04:05 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:10.656 22:04:05 -- spdkcli/tcp.sh@38 -- # killprocess 3375119 00:05:10.656 22:04:05 -- common/autotest_common.sh@926 -- # '[' -z 3375119 ']' 00:05:10.656 22:04:05 -- common/autotest_common.sh@930 -- # kill -0 3375119 00:05:10.656 22:04:05 -- common/autotest_common.sh@931 -- # uname 00:05:10.656 22:04:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:10.656 22:04:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3375119 00:05:10.656 22:04:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:10.656 22:04:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:10.656 22:04:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3375119' 00:05:10.656 killing process with pid 3375119 00:05:10.656 22:04:05 -- common/autotest_common.sh@945 -- # kill 3375119 00:05:10.656 22:04:05 -- common/autotest_common.sh@950 -- # wait 3375119 00:05:10.915 00:05:10.915 real 0m1.485s 00:05:10.915 user 0m2.795s 00:05:10.915 sys 0m0.434s 00:05:10.915 22:04:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.915 22:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:10.915 ************************************ 00:05:10.915 END TEST spdkcli_tcp 00:05:10.915 ************************************ 00:05:11.174 22:04:06 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.174 22:04:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:11.174 22:04:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.174 22:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:11.174 ************************************ 00:05:11.174 START TEST dpdk_mem_utility 00:05:11.174 ************************************ 00:05:11.174 22:04:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.174 * Looking for test storage... 00:05:11.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:11.174 22:04:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:11.174 22:04:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3375424 00:05:11.174 22:04:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3375424 00:05:11.174 22:04:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.174 22:04:06 -- common/autotest_common.sh@819 -- # '[' -z 3375424 ']' 00:05:11.174 22:04:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.174 22:04:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:11.174 22:04:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:11.174 22:04:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:11.174 22:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:11.174 [2024-07-24 22:04:06.194408] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:11.174 [2024-07-24 22:04:06.194458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3375424 ] 00:05:11.174 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.174 [2024-07-24 22:04:06.249558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.174 [2024-07-24 22:04:06.287796] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:11.174 [2024-07-24 22:04:06.287922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.114 22:04:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:12.114 22:04:06 -- common/autotest_common.sh@852 -- # return 0 00:05:12.114 22:04:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:12.114 22:04:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:12.114 22:04:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:12.114 22:04:06 -- common/autotest_common.sh@10 -- # set +x 00:05:12.114 { 00:05:12.114 "filename": "/tmp/spdk_mem_dump.txt" 00:05:12.114 } 00:05:12.114 22:04:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:12.114 22:04:06 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:12.114 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:12.114 1 heaps totaling size 814.000000 MiB 00:05:12.114 size: 814.000000 MiB heap id: 0 00:05:12.114 end heaps---------- 00:05:12.114 8 mempools totaling size 598.116089 MiB 00:05:12.114 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:12.114 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:12.114 size: 84.521057 MiB name: bdev_io_3375424 00:05:12.114 size: 51.011292 MiB name: evtpool_3375424 00:05:12.114 size: 50.003479 MiB name: msgpool_3375424 00:05:12.114 size: 21.763794 MiB name: PDU_Pool 00:05:12.114 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:12.114 size: 0.026123 MiB name: Session_Pool 00:05:12.114 end mempools------- 00:05:12.114 6 memzones totaling size 4.142822 MiB 00:05:12.114 size: 1.000366 MiB name: RG_ring_0_3375424 00:05:12.114 size: 1.000366 MiB name: RG_ring_1_3375424 00:05:12.114 size: 1.000366 MiB name: RG_ring_4_3375424 00:05:12.114 size: 1.000366 MiB name: RG_ring_5_3375424 00:05:12.114 size: 0.125366 MiB name: RG_ring_2_3375424 00:05:12.114 size: 0.015991 MiB name: RG_ring_3_3375424 00:05:12.114 end memzones------- 00:05:12.114 22:04:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:12.114 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:12.114 list of free elements. 
size: 12.519348 MiB 00:05:12.114 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:12.114 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:12.114 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:12.114 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:12.114 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:12.114 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:12.114 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:12.114 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:12.114 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:12.114 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:12.114 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:12.114 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:12.114 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:12.114 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:12.114 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:12.114 list of standard malloc elements. size: 199.218079 MiB 00:05:12.114 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:12.114 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:12.114 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:12.114 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:12.114 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:12.114 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:12.114 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:12.114 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:12.114 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:12.114 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:12.114 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:12.114 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:12.114 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:12.114 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:12.114 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:12.114 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:12.114 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:12.114 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:12.114 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:12.114 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:12.114 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:12.114 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:12.114 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:12.114 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:12.114 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:12.114 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:12.114 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:12.114 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:12.114 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:12.114 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:12.114 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:12.114 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:12.114 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:12.114 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:12.114 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:12.114 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:12.114 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:12.114 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:12.114 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:12.114 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:12.114 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:12.114 list of memzone associated elements. size: 602.262573 MiB 00:05:12.114 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:12.114 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:12.114 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:12.114 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:12.114 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:12.114 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3375424_0 00:05:12.114 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:12.114 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3375424_0 00:05:12.114 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:12.114 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3375424_0 00:05:12.114 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:12.114 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:12.114 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:12.114 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:12.114 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:12.114 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3375424 00:05:12.114 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:12.114 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3375424 00:05:12.115 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:12.115 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3375424 00:05:12.115 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:12.115 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:12.115 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:12.115 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:12.115 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:12.115 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:12.115 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:12.115 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:12.115 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:12.115 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3375424 00:05:12.115 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:12.115 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3375424 00:05:12.115 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:12.115 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3375424 00:05:12.115 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:12.115 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3375424 00:05:12.115 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:12.115 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3375424 00:05:12.115 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:12.115 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:12.115 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:12.115 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:12.115 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:12.115 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:12.115 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:12.115 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3375424 00:05:12.115 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:12.115 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:12.115 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:12.115 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:12.115 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:12.115 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3375424 00:05:12.115 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:12.115 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:12.115 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:12.115 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3375424 00:05:12.115 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:12.115 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3375424 00:05:12.115 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:12.115 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:12.115 22:04:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:12.115 22:04:07 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3375424 00:05:12.115 22:04:07 -- common/autotest_common.sh@926 -- # '[' -z 3375424 ']' 00:05:12.115 22:04:07 -- common/autotest_common.sh@930 -- # kill -0 3375424 00:05:12.115 22:04:07 -- common/autotest_common.sh@931 -- # uname 00:05:12.115 22:04:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:12.115 22:04:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3375424 00:05:12.115 22:04:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:12.115 22:04:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:12.115 22:04:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3375424' 00:05:12.115 killing process with pid 3375424 00:05:12.115 22:04:07 -- common/autotest_common.sh@945 -- # kill 3375424 00:05:12.115 22:04:07 -- common/autotest_common.sh@950 -- # wait 3375424 00:05:12.374 00:05:12.374 real 0m1.366s 00:05:12.374 user 0m1.429s 00:05:12.374 sys 0m0.398s 00:05:12.375 22:04:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.375 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:05:12.375 ************************************ 00:05:12.375 END TEST dpdk_mem_utility 00:05:12.375 ************************************ 00:05:12.375 22:04:07 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:12.375 22:04:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:12.375 22:04:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:12.375 22:04:07 -- common/autotest_common.sh@10 -- # set +x 
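The dpdk_mem_utility run that just finished boils down to three calls against a running target: ask it to write /tmp/spdk_mem_dump.txt via the env_dpdk_get_mem_stats RPC, then let scripts/dpdk_mem_info.py summarize heaps, mempools and memzones, and finally rerun it with -m 0 for the per-element view of heap 0 shown above. An outline of that flow, assuming a target already listening on the default /var/tmp/spdk.sock:

#!/usr/bin/env bash
# Outline of the dpdk_mem_info flow tested above; assumes a target is already up on /var/tmp/spdk.sock.
set -euo pipefail

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Ask the running target to dump its DPDK memory state (the RPC reports /tmp/spdk_mem_dump.txt).
"$rootdir/scripts/rpc.py" env_dpdk_get_mem_stats

# Summarize heaps, mempools and memzones from the dump ...
"$rootdir/scripts/dpdk_mem_info.py"
# ... and show the malloc element / memzone association for heap id 0.
"$rootdir/scripts/dpdk_mem_info.py" -m 0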
00:05:12.375 ************************************ 00:05:12.375 START TEST event 00:05:12.375 ************************************ 00:05:12.375 22:04:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:12.634 * Looking for test storage... 00:05:12.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:12.634 22:04:07 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:12.634 22:04:07 -- bdev/nbd_common.sh@6 -- # set -e 00:05:12.634 22:04:07 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.634 22:04:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:12.634 22:04:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:12.634 22:04:07 -- common/autotest_common.sh@10 -- # set +x 00:05:12.634 ************************************ 00:05:12.634 START TEST event_perf 00:05:12.634 ************************************ 00:05:12.634 22:04:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.634 Running I/O for 1 seconds...[2024-07-24 22:04:07.542102] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:12.634 [2024-07-24 22:04:07.542157] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3375714 ] 00:05:12.634 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.634 [2024-07-24 22:04:07.596554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.634 [2024-07-24 22:04:07.637359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.634 [2024-07-24 22:04:07.637455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.634 [2024-07-24 22:04:07.637551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.634 [2024-07-24 22:04:07.637553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.571 Running I/O for 1 seconds... 00:05:13.571 lcore 0: 203826 00:05:13.571 lcore 1: 203825 00:05:13.571 lcore 2: 203827 00:05:13.571 lcore 3: 203827 00:05:13.571 done. 
00:05:13.571 00:05:13.571 real 0m1.168s 00:05:13.571 user 0m4.101s 00:05:13.571 sys 0m0.064s 00:05:13.571 22:04:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.571 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:05:13.571 ************************************ 00:05:13.571 END TEST event_perf 00:05:13.571 ************************************ 00:05:13.831 22:04:08 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:13.831 22:04:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:13.831 22:04:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.831 22:04:08 -- common/autotest_common.sh@10 -- # set +x 00:05:13.831 ************************************ 00:05:13.831 START TEST event_reactor 00:05:13.831 ************************************ 00:05:13.831 22:04:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:13.831 [2024-07-24 22:04:08.741794] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:13.831 [2024-07-24 22:04:08.741840] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3375967 ] 00:05:13.831 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.831 [2024-07-24 22:04:08.794048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.831 [2024-07-24 22:04:08.830550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.765 test_start 00:05:14.765 oneshot 00:05:14.765 tick 100 00:05:14.765 tick 100 00:05:14.765 tick 250 00:05:14.765 tick 100 00:05:14.765 tick 100 00:05:14.765 tick 250 00:05:14.765 tick 500 00:05:14.765 tick 100 00:05:14.765 tick 100 00:05:14.765 tick 100 00:05:14.765 tick 250 00:05:14.765 tick 100 00:05:14.765 tick 100 00:05:14.765 test_end 00:05:14.765 00:05:14.765 real 0m1.155s 00:05:14.765 user 0m1.083s 00:05:14.765 sys 0m0.068s 00:05:14.765 22:04:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.765 22:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:14.765 ************************************ 00:05:14.765 END TEST event_reactor 00:05:14.765 ************************************ 00:05:15.025 22:04:09 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:15.025 22:04:09 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:15.025 22:04:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.025 22:04:09 -- common/autotest_common.sh@10 -- # set +x 00:05:15.025 ************************************ 00:05:15.025 START TEST event_reactor_perf 00:05:15.025 ************************************ 00:05:15.025 22:04:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:15.025 [2024-07-24 22:04:09.941609] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:15.025 [2024-07-24 22:04:09.941688] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376221 ] 00:05:15.025 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.025 [2024-07-24 22:04:09.997876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.025 [2024-07-24 22:04:10.037573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.964 test_start 00:05:15.964 test_end 00:05:15.964 Performance: 488901 events per second 00:05:15.964 00:05:15.964 real 0m1.174s 00:05:15.964 user 0m1.103s 00:05:15.964 sys 0m0.066s 00:05:15.964 22:04:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.964 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:15.964 ************************************ 00:05:15.964 END TEST event_reactor_perf 00:05:15.964 ************************************ 00:05:16.224 22:04:11 -- event/event.sh@49 -- # uname -s 00:05:16.224 22:04:11 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:16.224 22:04:11 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:16.224 22:04:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.224 22:04:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.224 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.224 ************************************ 00:05:16.224 START TEST event_scheduler 00:05:16.224 ************************************ 00:05:16.224 22:04:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:16.224 * Looking for test storage... 00:05:16.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:16.224 22:04:11 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:16.224 22:04:11 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3376496 00:05:16.224 22:04:11 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.224 22:04:11 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:16.224 22:04:11 -- scheduler/scheduler.sh@37 -- # waitforlisten 3376496 00:05:16.224 22:04:11 -- common/autotest_common.sh@819 -- # '[' -z 3376496 ']' 00:05:16.224 22:04:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.224 22:04:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:16.224 22:04:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.224 22:04:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:16.224 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.224 [2024-07-24 22:04:11.256366] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:16.224 [2024-07-24 22:04:11.256417] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3376496 ] 00:05:16.224 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.224 [2024-07-24 22:04:11.306739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.224 [2024-07-24 22:04:11.348198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.224 [2024-07-24 22:04:11.348287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.224 [2024-07-24 22:04:11.348371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.224 [2024-07-24 22:04:11.348373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.484 22:04:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:16.484 22:04:11 -- common/autotest_common.sh@852 -- # return 0 00:05:16.484 22:04:11 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:16.484 22:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.484 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.484 POWER: Env isn't set yet! 00:05:16.484 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:16.484 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.484 POWER: Cannot set governor of lcore 0 to userspace 00:05:16.484 POWER: Attempting to initialise PSTAT power management... 00:05:16.484 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:16.484 POWER: Initialized successfully for lcore 0 power management 00:05:16.484 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:16.484 POWER: Initialized successfully for lcore 1 power management 00:05:16.484 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:16.484 POWER: Initialized successfully for lcore 2 power management 00:05:16.484 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:16.484 POWER: Initialized successfully for lcore 3 power management 00:05:16.484 [2024-07-24 22:04:11.426198] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:16.484 [2024-07-24 22:04:11.426211] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:16.484 [2024-07-24 22:04:11.426218] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:16.484 22:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:16.484 22:04:11 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:16.484 22:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.484 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.484 [2024-07-24 22:04:11.489260] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
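Note: the POWER/governor lines and the "Setting scheduler load limit ..." notices above are produced when scheduler.sh switches the app to the dynamic scheduler and then lets the framework finish initializing. A small sketch of those two RPCs, assuming the scheduler test app was started with --wait-for-rpc and is listening on the default /var/tmp/spdk.sock:

    # Sketch only; assumes an SPDK app started with --wait-for-rpc on /var/tmp/spdk.sock.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # path taken from the log
    "$rootdir/scripts/rpc.py" framework_set_scheduler dynamic    # triggers the load-limit/core-limit/core-busy notices above
    "$rootdir/scripts/rpc.py" framework_start_init               # completes init; the scheduler test app then starts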
00:05:16.485 22:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:16.485 22:04:11 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:16.485 22:04:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.485 22:04:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.485 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.485 ************************************ 00:05:16.485 START TEST scheduler_create_thread 00:05:16.485 ************************************ 00:05:16.485 22:04:11 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:16.485 22:04:11 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:16.485 22:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.485 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.485 2 00:05:16.485 22:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:16.485 22:04:11 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:16.485 22:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.485 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.485 3 00:05:16.485 22:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:16.485 22:04:11 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:16.485 22:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.485 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.485 4 00:05:16.485 22:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:16.485 22:04:11 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:16.485 22:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.485 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.485 5 00:05:16.485 22:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:16.485 22:04:11 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:16.485 22:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.485 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.485 6 00:05:16.485 22:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:16.485 22:04:11 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:16.485 22:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.485 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.485 7 00:05:16.485 22:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:16.485 22:04:11 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:16.485 22:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.485 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.485 8 00:05:16.485 22:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:16.485 22:04:11 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:16.485 22:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.485 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.485 9 00:05:16.485 
22:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:16.485 22:04:11 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:16.485 22:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.485 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.485 10 00:05:16.485 22:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:16.485 22:04:11 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:16.485 22:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.485 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.485 22:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:16.485 22:04:11 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:16.485 22:04:11 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:16.485 22:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:16.485 22:04:11 -- common/autotest_common.sh@10 -- # set +x 00:05:17.425 22:04:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:17.425 22:04:12 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:17.425 22:04:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:17.425 22:04:12 -- common/autotest_common.sh@10 -- # set +x 00:05:18.806 22:04:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:18.806 22:04:13 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:18.806 22:04:13 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:18.806 22:04:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:18.806 22:04:13 -- common/autotest_common.sh@10 -- # set +x 00:05:19.744 22:04:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:19.744 00:05:19.744 real 0m3.380s 00:05:19.744 user 0m0.024s 00:05:19.744 sys 0m0.002s 00:05:19.744 22:04:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.744 22:04:14 -- common/autotest_common.sh@10 -- # set +x 00:05:19.744 ************************************ 00:05:19.744 END TEST scheduler_create_thread 00:05:19.744 ************************************ 00:05:20.004 22:04:14 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:20.004 22:04:14 -- scheduler/scheduler.sh@46 -- # killprocess 3376496 00:05:20.004 22:04:14 -- common/autotest_common.sh@926 -- # '[' -z 3376496 ']' 00:05:20.004 22:04:14 -- common/autotest_common.sh@930 -- # kill -0 3376496 00:05:20.004 22:04:14 -- common/autotest_common.sh@931 -- # uname 00:05:20.004 22:04:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:20.004 22:04:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3376496 00:05:20.004 22:04:14 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:20.004 22:04:14 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:20.004 22:04:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3376496' 00:05:20.004 killing process with pid 3376496 00:05:20.004 22:04:14 -- common/autotest_common.sh@945 -- # kill 3376496 00:05:20.004 22:04:14 -- common/autotest_common.sh@950 -- # wait 3376496 00:05:20.264 [2024-07-24 22:04:15.257160] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
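Note: the bare "2" through "10" markers interleaved above are the thread IDs echoed back as scheduler.sh creates its pinned active/idle threads through the scheduler_plugin RPCs, then retunes thread 11 and deletes thread 12. A sketch of the same calls; the PYTHONPATH line is an assumption of this sketch (the plugin's location is not shown in the log):

    # Sketch; rpc.py --plugin needs the plugin module importable, which is assumed here.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    export PYTHONPATH="$rootdir/test/event/scheduler:$PYTHONPATH"   # assumed location of scheduler_plugin.py
    "$rootdir/scripts/rpc.py" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    "$rootdir/scripts/rpc.py" --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id 11, values from the log
    "$rootdir/scripts/rpc.py" --plugin scheduler_plugin scheduler_thread_delete 12          # thread id 12, from the log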
00:05:20.525 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:20.525 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:20.525 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:20.525 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:20.525 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:20.525 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:20.525 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:20.525 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:20.525 00:05:20.525 real 0m4.340s 00:05:20.525 user 0m7.701s 00:05:20.525 sys 0m0.281s 00:05:20.525 22:04:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.525 22:04:15 -- common/autotest_common.sh@10 -- # set +x 00:05:20.525 ************************************ 00:05:20.525 END TEST event_scheduler 00:05:20.525 ************************************ 00:05:20.525 22:04:15 -- event/event.sh@51 -- # modprobe -n nbd 00:05:20.525 22:04:15 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:20.525 22:04:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.525 22:04:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.525 22:04:15 -- common/autotest_common.sh@10 -- # set +x 00:05:20.525 ************************************ 00:05:20.525 START TEST app_repeat 00:05:20.525 ************************************ 00:05:20.525 22:04:15 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:20.525 22:04:15 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.525 22:04:15 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.525 22:04:15 -- event/event.sh@13 -- # local nbd_list 00:05:20.525 22:04:15 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.525 22:04:15 -- event/event.sh@14 -- # local bdev_list 00:05:20.525 22:04:15 -- event/event.sh@15 -- # local repeat_times=4 00:05:20.525 22:04:15 -- event/event.sh@17 -- # modprobe nbd 00:05:20.525 22:04:15 -- event/event.sh@19 -- # repeat_pid=3377244 00:05:20.525 22:04:15 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.525 22:04:15 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:20.525 22:04:15 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3377244' 00:05:20.525 Process app_repeat pid: 3377244 00:05:20.525 22:04:15 -- event/event.sh@23 -- # for i in {0..2} 00:05:20.525 22:04:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:20.525 spdk_app_start Round 0 00:05:20.525 22:04:15 -- event/event.sh@25 -- # waitforlisten 3377244 /var/tmp/spdk-nbd.sock 00:05:20.525 22:04:15 -- common/autotest_common.sh@819 -- # '[' -z 3377244 ']' 00:05:20.525 22:04:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.525 22:04:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:20.525 22:04:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:20.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.525 22:04:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:20.525 22:04:15 -- common/autotest_common.sh@10 -- # set +x 00:05:20.525 [2024-07-24 22:04:15.555440] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:20.525 [2024-07-24 22:04:15.555521] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377244 ] 00:05:20.525 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.525 [2024-07-24 22:04:15.612396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.525 [2024-07-24 22:04:15.650016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.525 [2024-07-24 22:04:15.650018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.465 22:04:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:21.465 22:04:16 -- common/autotest_common.sh@852 -- # return 0 00:05:21.465 22:04:16 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.465 Malloc0 00:05:21.465 22:04:16 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.725 Malloc1 00:05:21.725 22:04:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@12 -- # local i 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.725 22:04:16 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.985 /dev/nbd0 00:05:21.985 22:04:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.985 22:04:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.985 22:04:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:21.985 22:04:16 -- common/autotest_common.sh@857 -- # local i 00:05:21.985 22:04:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:21.985 22:04:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:21.985 22:04:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:21.985 22:04:16 -- 
common/autotest_common.sh@861 -- # break 00:05:21.985 22:04:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:21.985 22:04:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:21.985 22:04:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.985 1+0 records in 00:05:21.985 1+0 records out 00:05:21.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186971 s, 21.9 MB/s 00:05:21.985 22:04:16 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.985 22:04:16 -- common/autotest_common.sh@874 -- # size=4096 00:05:21.985 22:04:16 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:21.985 22:04:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:21.985 22:04:16 -- common/autotest_common.sh@877 -- # return 0 00:05:21.985 22:04:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.985 22:04:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.985 22:04:16 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.985 /dev/nbd1 00:05:21.985 22:04:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.985 22:04:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.985 22:04:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:21.985 22:04:17 -- common/autotest_common.sh@857 -- # local i 00:05:21.985 22:04:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:21.985 22:04:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:21.985 22:04:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:21.985 22:04:17 -- common/autotest_common.sh@861 -- # break 00:05:21.985 22:04:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:21.985 22:04:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:21.985 22:04:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.985 1+0 records in 00:05:21.985 1+0 records out 00:05:21.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230983 s, 17.7 MB/s 00:05:22.244 22:04:17 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.244 22:04:17 -- common/autotest_common.sh@874 -- # size=4096 00:05:22.244 22:04:17 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.244 22:04:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:22.244 22:04:17 -- common/autotest_common.sh@877 -- # return 0 00:05:22.244 22:04:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.244 22:04:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.244 22:04:17 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.244 22:04:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.244 22:04:17 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.244 22:04:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.244 { 00:05:22.244 "nbd_device": "/dev/nbd0", 00:05:22.244 "bdev_name": "Malloc0" 00:05:22.244 }, 00:05:22.244 { 00:05:22.245 "nbd_device": "/dev/nbd1", 
00:05:22.245 "bdev_name": "Malloc1" 00:05:22.245 } 00:05:22.245 ]' 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.245 { 00:05:22.245 "nbd_device": "/dev/nbd0", 00:05:22.245 "bdev_name": "Malloc0" 00:05:22.245 }, 00:05:22.245 { 00:05:22.245 "nbd_device": "/dev/nbd1", 00:05:22.245 "bdev_name": "Malloc1" 00:05:22.245 } 00:05:22.245 ]' 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.245 /dev/nbd1' 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.245 /dev/nbd1' 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.245 256+0 records in 00:05:22.245 256+0 records out 00:05:22.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103367 s, 101 MB/s 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.245 22:04:17 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.504 256+0 records in 00:05:22.504 256+0 records out 00:05:22.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134809 s, 77.8 MB/s 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.504 256+0 records in 00:05:22.504 256+0 records out 00:05:22.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150551 s, 69.6 MB/s 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@51 -- # local i 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@41 -- # break 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.504 22:04:17 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.764 22:04:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.764 22:04:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.764 22:04:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.764 22:04:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.764 22:04:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.764 22:04:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.764 22:04:17 -- bdev/nbd_common.sh@41 -- # break 00:05:22.764 22:04:17 -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.764 22:04:17 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.764 22:04:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.764 22:04:17 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.023 22:04:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.023 22:04:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.023 22:04:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.023 22:04:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.023 22:04:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.023 22:04:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.023 22:04:17 -- bdev/nbd_common.sh@65 -- # true 00:05:23.023 22:04:17 -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.023 22:04:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.023 22:04:17 -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.023 22:04:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.023 22:04:17 -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.023 22:04:17 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.314 22:04:18 -- event/event.sh@35 -- # 
sleep 3 00:05:23.314 [2024-07-24 22:04:18.344753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.314 [2024-07-24 22:04:18.378953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.314 [2024-07-24 22:04:18.378955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.314 [2024-07-24 22:04:18.419695] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.314 [2024-07-24 22:04:18.419734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:26.605 22:04:21 -- event/event.sh@23 -- # for i in {0..2} 00:05:26.605 22:04:21 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:26.605 spdk_app_start Round 1 00:05:26.605 22:04:21 -- event/event.sh@25 -- # waitforlisten 3377244 /var/tmp/spdk-nbd.sock 00:05:26.605 22:04:21 -- common/autotest_common.sh@819 -- # '[' -z 3377244 ']' 00:05:26.605 22:04:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.605 22:04:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:26.605 22:04:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.605 22:04:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:26.605 22:04:21 -- common/autotest_common.sh@10 -- # set +x 00:05:26.605 22:04:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.605 22:04:21 -- common/autotest_common.sh@852 -- # return 0 00:05:26.605 22:04:21 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.605 Malloc0 00:05:26.605 22:04:21 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.605 Malloc1 00:05:26.605 22:04:21 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@12 -- # local i 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.605 22:04:21 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.865 /dev/nbd0 00:05:26.865 22:04:21 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.865 22:04:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.865 22:04:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:26.865 22:04:21 -- common/autotest_common.sh@857 -- # local i 00:05:26.865 22:04:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:26.865 22:04:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:26.865 22:04:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:26.865 22:04:21 -- common/autotest_common.sh@861 -- # break 00:05:26.865 22:04:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:26.865 22:04:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:26.865 22:04:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.865 1+0 records in 00:05:26.865 1+0 records out 00:05:26.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191083 s, 21.4 MB/s 00:05:26.865 22:04:21 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.865 22:04:21 -- common/autotest_common.sh@874 -- # size=4096 00:05:26.865 22:04:21 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:26.865 22:04:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:26.865 22:04:21 -- common/autotest_common.sh@877 -- # return 0 00:05:26.865 22:04:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.865 22:04:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.865 22:04:21 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:27.125 /dev/nbd1 00:05:27.125 22:04:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:27.125 22:04:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:27.125 22:04:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:27.125 22:04:22 -- common/autotest_common.sh@857 -- # local i 00:05:27.125 22:04:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:27.125 22:04:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:27.125 22:04:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:27.125 22:04:22 -- common/autotest_common.sh@861 -- # break 00:05:27.125 22:04:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:27.125 22:04:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:27.125 22:04:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.125 1+0 records in 00:05:27.125 1+0 records out 00:05:27.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200641 s, 20.4 MB/s 00:05:27.125 22:04:22 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.125 22:04:22 -- common/autotest_common.sh@874 -- # size=4096 00:05:27.125 22:04:22 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.125 22:04:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:27.125 22:04:22 -- common/autotest_common.sh@877 -- # return 0 00:05:27.125 22:04:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.125 22:04:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.125 22:04:22 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.125 22:04:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.125 22:04:22 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.125 22:04:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.125 { 00:05:27.125 "nbd_device": "/dev/nbd0", 00:05:27.125 "bdev_name": "Malloc0" 00:05:27.125 }, 00:05:27.125 { 00:05:27.125 "nbd_device": "/dev/nbd1", 00:05:27.125 "bdev_name": "Malloc1" 00:05:27.125 } 00:05:27.125 ]' 00:05:27.125 22:04:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.125 { 00:05:27.125 "nbd_device": "/dev/nbd0", 00:05:27.125 "bdev_name": "Malloc0" 00:05:27.125 }, 00:05:27.125 { 00:05:27.125 "nbd_device": "/dev/nbd1", 00:05:27.125 "bdev_name": "Malloc1" 00:05:27.125 } 00:05:27.125 ]' 00:05:27.125 22:04:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.386 /dev/nbd1' 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.386 /dev/nbd1' 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.386 256+0 records in 00:05:27.386 256+0 records out 00:05:27.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103212 s, 102 MB/s 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.386 256+0 records in 00:05:27.386 256+0 records out 00:05:27.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144058 s, 72.8 MB/s 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.386 256+0 records in 00:05:27.386 256+0 records out 00:05:27.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139924 s, 74.9 MB/s 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@51 -- # local i 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.386 22:04:22 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@41 -- # break 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@41 -- # break 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.645 22:04:22 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.905 22:04:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.905 22:04:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.905 22:04:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.905 22:04:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.905 22:04:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.905 22:04:22 -- 
bdev/nbd_common.sh@65 -- # echo '' 00:05:27.905 22:04:22 -- bdev/nbd_common.sh@65 -- # true 00:05:27.905 22:04:22 -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.905 22:04:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.905 22:04:22 -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.905 22:04:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.905 22:04:22 -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.905 22:04:22 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.164 22:04:23 -- event/event.sh@35 -- # sleep 3 00:05:28.164 [2024-07-24 22:04:23.278000] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.422 [2024-07-24 22:04:23.313114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.422 [2024-07-24 22:04:23.313116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.422 [2024-07-24 22:04:23.354546] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.422 [2024-07-24 22:04:23.354588] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.715 22:04:26 -- event/event.sh@23 -- # for i in {0..2} 00:05:31.715 22:04:26 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:31.715 spdk_app_start Round 2 00:05:31.715 22:04:26 -- event/event.sh@25 -- # waitforlisten 3377244 /var/tmp/spdk-nbd.sock 00:05:31.715 22:04:26 -- common/autotest_common.sh@819 -- # '[' -z 3377244 ']' 00:05:31.715 22:04:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.715 22:04:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.715 22:04:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:31.715 22:04:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.715 22:04:26 -- common/autotest_common.sh@10 -- # set +x 00:05:31.715 22:04:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:31.715 22:04:26 -- common/autotest_common.sh@852 -- # return 0 00:05:31.715 22:04:26 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.715 Malloc0 00:05:31.715 22:04:26 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.715 Malloc1 00:05:31.715 22:04:26 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.715 22:04:26 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@12 -- # local i 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.716 /dev/nbd0 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.716 22:04:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:31.716 22:04:26 -- common/autotest_common.sh@857 -- # local i 00:05:31.716 22:04:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:31.716 22:04:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:31.716 22:04:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:31.716 22:04:26 -- common/autotest_common.sh@861 -- # break 00:05:31.716 22:04:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:31.716 22:04:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:31.716 22:04:26 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.716 1+0 records in 00:05:31.716 1+0 records out 00:05:31.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199171 s, 20.6 MB/s 00:05:31.716 22:04:26 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.716 22:04:26 -- common/autotest_common.sh@874 -- # size=4096 00:05:31.716 22:04:26 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.716 22:04:26 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:05:31.716 22:04:26 -- common/autotest_common.sh@877 -- # return 0 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.716 22:04:26 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.976 /dev/nbd1 00:05:31.976 22:04:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.976 22:04:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.976 22:04:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:31.976 22:04:27 -- common/autotest_common.sh@857 -- # local i 00:05:31.976 22:04:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:31.976 22:04:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:31.976 22:04:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:31.976 22:04:27 -- common/autotest_common.sh@861 -- # break 00:05:31.976 22:04:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:31.976 22:04:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:31.976 22:04:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.976 1+0 records in 00:05:31.976 1+0 records out 00:05:31.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231913 s, 17.7 MB/s 00:05:31.976 22:04:27 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.976 22:04:27 -- common/autotest_common.sh@874 -- # size=4096 00:05:31.976 22:04:27 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:31.976 22:04:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:31.976 22:04:27 -- common/autotest_common.sh@877 -- # return 0 00:05:31.976 22:04:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.976 22:04:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.976 22:04:27 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.976 22:04:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.976 22:04:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:32.236 { 00:05:32.236 "nbd_device": "/dev/nbd0", 00:05:32.236 "bdev_name": "Malloc0" 00:05:32.236 }, 00:05:32.236 { 00:05:32.236 "nbd_device": "/dev/nbd1", 00:05:32.236 "bdev_name": "Malloc1" 00:05:32.236 } 00:05:32.236 ]' 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.236 { 00:05:32.236 "nbd_device": "/dev/nbd0", 00:05:32.236 "bdev_name": "Malloc0" 00:05:32.236 }, 00:05:32.236 { 00:05:32.236 "nbd_device": "/dev/nbd1", 00:05:32.236 "bdev_name": "Malloc1" 00:05:32.236 } 00:05:32.236 ]' 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.236 /dev/nbd1' 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.236 /dev/nbd1' 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.236 22:04:27 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.236 256+0 records in 00:05:32.236 256+0 records out 00:05:32.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00966785 s, 108 MB/s 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.236 256+0 records in 00:05:32.236 256+0 records out 00:05:32.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0133704 s, 78.4 MB/s 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.236 22:04:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.236 256+0 records in 00:05:32.236 256+0 records out 00:05:32.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142936 s, 73.4 MB/s 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@51 -- # local i 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.237 22:04:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.497 22:04:27 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.497 22:04:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.497 22:04:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.497 22:04:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.497 22:04:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.497 22:04:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.497 22:04:27 -- bdev/nbd_common.sh@41 -- # break 00:05:32.497 22:04:27 -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.497 22:04:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.497 22:04:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@41 -- # break 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.757 22:04:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.017 22:04:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.017 22:04:27 -- bdev/nbd_common.sh@65 -- # true 00:05:33.017 22:04:27 -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.017 22:04:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.017 22:04:27 -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.017 22:04:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.017 22:04:27 -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.017 22:04:27 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:33.017 22:04:28 -- event/event.sh@35 -- # sleep 3 00:05:33.277 [2024-07-24 22:04:28.256484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.277 [2024-07-24 22:04:28.291078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.277 [2024-07-24 22:04:28.291081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.277 [2024-07-24 22:04:28.332451] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.277 [2024-07-24 22:04:28.332492] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
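Tear-down above stops each disk over the RPC socket, polls /proc/partitions until the device node disappears, and finally counts the disks still exported. A hedged sketch of the two helpers the trace walks through (waitfornbd_exit and nbd_get_count; the 0.1 s poll delay is an assumption, the trace does not show it):

    # Reconstructed from the trace; the real versions live in test/bdev/nbd_common.sh.
    waitfornbd_exit() {
        local nbd_name=$1
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1        # still present, poll again (assumed delay)
            else
                break            # gone from /proc/partitions
            fi
        done
        return 0
    }

    nbd_get_count() {
        local rpc_server=$1
        local disks
        disks=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks | jq -r '.[] | .nbd_device')
        # grep -c prints 0 and exits non-zero when nothing matches, hence the || true
        echo "$disks" | grep -c /dev/nbd || true
    }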
00:05:36.573 22:04:31 -- event/event.sh@38 -- # waitforlisten 3377244 /var/tmp/spdk-nbd.sock 00:05:36.573 22:04:31 -- common/autotest_common.sh@819 -- # '[' -z 3377244 ']' 00:05:36.573 22:04:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:36.573 22:04:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:36.573 22:04:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:36.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:36.573 22:04:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:36.573 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:36.573 22:04:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:36.573 22:04:31 -- common/autotest_common.sh@852 -- # return 0 00:05:36.573 22:04:31 -- event/event.sh@39 -- # killprocess 3377244 00:05:36.573 22:04:31 -- common/autotest_common.sh@926 -- # '[' -z 3377244 ']' 00:05:36.573 22:04:31 -- common/autotest_common.sh@930 -- # kill -0 3377244 00:05:36.573 22:04:31 -- common/autotest_common.sh@931 -- # uname 00:05:36.573 22:04:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:36.573 22:04:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3377244 00:05:36.573 22:04:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:36.573 22:04:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:36.573 22:04:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3377244' 00:05:36.573 killing process with pid 3377244 00:05:36.573 22:04:31 -- common/autotest_common.sh@945 -- # kill 3377244 00:05:36.573 22:04:31 -- common/autotest_common.sh@950 -- # wait 3377244 00:05:36.573 spdk_app_start is called in Round 0. 00:05:36.573 Shutdown signal received, stop current app iteration 00:05:36.573 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:05:36.573 spdk_app_start is called in Round 1. 00:05:36.573 Shutdown signal received, stop current app iteration 00:05:36.573 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:05:36.573 spdk_app_start is called in Round 2. 00:05:36.573 Shutdown signal received, stop current app iteration 00:05:36.573 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 reinitialization... 00:05:36.573 spdk_app_start is called in Round 3. 
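The app_repeat test finishes by tearing the target down with killprocess, whose trace shows a pid guard, a liveness check, a comm-name lookup and a kill/wait pair. A hedged sketch (the sudo branch body is an assumption; the trace only shows the comparison):

    # Sketch of killprocess as implied by the trace; the real helper is in
    # test/common/autotest_common.sh and may differ in detail.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # refuse an empty pid
        kill -0 "$pid"                       # fail early if it is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            pid=$(pgrep -P "$pid")           # assumed: signal the child, not the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }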
00:05:36.573 Shutdown signal received, stop current app iteration 00:05:36.573 22:04:31 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:36.573 22:04:31 -- event/event.sh@42 -- # return 0 00:05:36.573 00:05:36.573 real 0m15.941s 00:05:36.573 user 0m34.783s 00:05:36.573 sys 0m2.208s 00:05:36.573 22:04:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.573 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:36.573 ************************************ 00:05:36.573 END TEST app_repeat 00:05:36.573 ************************************ 00:05:36.573 22:04:31 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:36.573 22:04:31 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:36.573 22:04:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.573 22:04:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.573 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:36.573 ************************************ 00:05:36.573 START TEST cpu_locks 00:05:36.573 ************************************ 00:05:36.573 22:04:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:36.573 * Looking for test storage... 00:05:36.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:36.573 22:04:31 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:36.574 22:04:31 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:36.574 22:04:31 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:36.574 22:04:31 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:36.574 22:04:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.574 22:04:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.574 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:36.574 ************************************ 00:05:36.574 START TEST default_locks 00:05:36.574 ************************************ 00:05:36.574 22:04:31 -- common/autotest_common.sh@1104 -- # default_locks 00:05:36.574 22:04:31 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3380174 00:05:36.574 22:04:31 -- event/cpu_locks.sh@47 -- # waitforlisten 3380174 00:05:36.574 22:04:31 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.574 22:04:31 -- common/autotest_common.sh@819 -- # '[' -z 3380174 ']' 00:05:36.574 22:04:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.574 22:04:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:36.574 22:04:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.574 22:04:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:36.574 22:04:31 -- common/autotest_common.sh@10 -- # set +x 00:05:36.574 [2024-07-24 22:04:31.640113] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:36.574 [2024-07-24 22:04:31.640165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380174 ] 00:05:36.574 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.574 [2024-07-24 22:04:31.695108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.833 [2024-07-24 22:04:31.734041] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:36.833 [2024-07-24 22:04:31.734192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.403 22:04:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:37.403 22:04:32 -- common/autotest_common.sh@852 -- # return 0 00:05:37.404 22:04:32 -- event/cpu_locks.sh@49 -- # locks_exist 3380174 00:05:37.404 22:04:32 -- event/cpu_locks.sh@22 -- # lslocks -p 3380174 00:05:37.404 22:04:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:37.664 lslocks: write error 00:05:37.664 22:04:32 -- event/cpu_locks.sh@50 -- # killprocess 3380174 00:05:37.664 22:04:32 -- common/autotest_common.sh@926 -- # '[' -z 3380174 ']' 00:05:37.664 22:04:32 -- common/autotest_common.sh@930 -- # kill -0 3380174 00:05:37.664 22:04:32 -- common/autotest_common.sh@931 -- # uname 00:05:37.664 22:04:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:37.664 22:04:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3380174 00:05:37.664 22:04:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:37.664 22:04:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:37.664 22:04:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3380174' 00:05:37.664 killing process with pid 3380174 00:05:37.664 22:04:32 -- common/autotest_common.sh@945 -- # kill 3380174 00:05:37.664 22:04:32 -- common/autotest_common.sh@950 -- # wait 3380174 00:05:38.234 22:04:33 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3380174 00:05:38.234 22:04:33 -- common/autotest_common.sh@640 -- # local es=0 00:05:38.234 22:04:33 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3380174 00:05:38.234 22:04:33 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:38.234 22:04:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:38.234 22:04:33 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:38.235 22:04:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:38.235 22:04:33 -- common/autotest_common.sh@643 -- # waitforlisten 3380174 00:05:38.235 22:04:33 -- common/autotest_common.sh@819 -- # '[' -z 3380174 ']' 00:05:38.235 22:04:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.235 22:04:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:38.235 22:04:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
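The "lslocks: write error" lines in this and the later lock tests are expected noise: grep -q closes the pipe as soon as it sees a match, and lslocks reports the resulting broken pipe. A sketch of the locks_exist check itself, as traced at cpu_locks.sh@22:

    # Hedged reconstruction of locks_exist from the trace.
    locks_exist() {
        local pid=$1
        # spdk_tgt takes an advisory lock file named spdk_cpu_lock_NNN for every core
        # it claims; lslocks -p lists them and grep -q succeeds if at least one exists
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }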
00:05:38.235 22:04:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.235 22:04:33 -- common/autotest_common.sh@10 -- # set +x 00:05:38.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3380174) - No such process 00:05:38.235 ERROR: process (pid: 3380174) is no longer running 00:05:38.235 22:04:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:38.235 22:04:33 -- common/autotest_common.sh@852 -- # return 1 00:05:38.235 22:04:33 -- common/autotest_common.sh@643 -- # es=1 00:05:38.235 22:04:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:38.235 22:04:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:38.235 22:04:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:38.235 22:04:33 -- event/cpu_locks.sh@54 -- # no_locks 00:05:38.235 22:04:33 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.235 22:04:33 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.235 22:04:33 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.235 00:05:38.235 real 0m1.494s 00:05:38.235 user 0m1.568s 00:05:38.235 sys 0m0.488s 00:05:38.235 22:04:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.235 22:04:33 -- common/autotest_common.sh@10 -- # set +x 00:05:38.235 ************************************ 00:05:38.235 END TEST default_locks 00:05:38.235 ************************************ 00:05:38.235 22:04:33 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:38.235 22:04:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.235 22:04:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.235 22:04:33 -- common/autotest_common.sh@10 -- # set +x 00:05:38.235 ************************************ 00:05:38.235 START TEST default_locks_via_rpc 00:05:38.235 ************************************ 00:05:38.235 22:04:33 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:38.235 22:04:33 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3380535 00:05:38.235 22:04:33 -- event/cpu_locks.sh@63 -- # waitforlisten 3380535 00:05:38.235 22:04:33 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.235 22:04:33 -- common/autotest_common.sh@819 -- # '[' -z 3380535 ']' 00:05:38.235 22:04:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.235 22:04:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:38.235 22:04:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.235 22:04:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:38.235 22:04:33 -- common/autotest_common.sh@10 -- # set +x 00:05:38.235 [2024-07-24 22:04:33.165283] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
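default_locks ends by asserting that waitforlisten on the already-killed pid fails; the NOT wrapper inverts the exit status so an expected failure counts as a pass. Reduced to the lines the trace exercises (the real wrapper also validates its argument and handles signal exits, omitted here):

    # Minimal sketch of the NOT helper seen at autotest_common.sh@640-667.
    NOT() {
        local es=0
        "$@" || es=$?
        # (( !es == 0 )) is true exactly when es != 0, i.e. when the command failed
        (( !es == 0 ))
    }

    # usage from the log: pid 3380174 is gone, so waitforlisten must fail,
    # and NOT turns that failure into a passing assertion
    NOT waitforlisten 3380174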
00:05:38.235 [2024-07-24 22:04:33.165331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380535 ] 00:05:38.235 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.235 [2024-07-24 22:04:33.218339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.235 [2024-07-24 22:04:33.257271] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:38.235 [2024-07-24 22:04:33.257407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.173 22:04:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:39.173 22:04:33 -- common/autotest_common.sh@852 -- # return 0 00:05:39.173 22:04:33 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:39.173 22:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.173 22:04:33 -- common/autotest_common.sh@10 -- # set +x 00:05:39.173 22:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.173 22:04:33 -- event/cpu_locks.sh@67 -- # no_locks 00:05:39.173 22:04:33 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.173 22:04:33 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.173 22:04:33 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.173 22:04:33 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:39.173 22:04:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:39.173 22:04:33 -- common/autotest_common.sh@10 -- # set +x 00:05:39.173 22:04:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:39.173 22:04:33 -- event/cpu_locks.sh@71 -- # locks_exist 3380535 00:05:39.173 22:04:33 -- event/cpu_locks.sh@22 -- # lslocks -p 3380535 00:05:39.173 22:04:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.173 22:04:34 -- event/cpu_locks.sh@73 -- # killprocess 3380535 00:05:39.173 22:04:34 -- common/autotest_common.sh@926 -- # '[' -z 3380535 ']' 00:05:39.173 22:04:34 -- common/autotest_common.sh@930 -- # kill -0 3380535 00:05:39.173 22:04:34 -- common/autotest_common.sh@931 -- # uname 00:05:39.173 22:04:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:39.173 22:04:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3380535 00:05:39.173 22:04:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:39.173 22:04:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:39.173 22:04:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3380535' 00:05:39.173 killing process with pid 3380535 00:05:39.173 22:04:34 -- common/autotest_common.sh@945 -- # kill 3380535 00:05:39.173 22:04:34 -- common/autotest_common.sh@950 -- # wait 3380535 00:05:39.434 00:05:39.434 real 0m1.398s 00:05:39.434 user 0m1.472s 00:05:39.434 sys 0m0.435s 00:05:39.434 22:04:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.434 22:04:34 -- common/autotest_common.sh@10 -- # set +x 00:05:39.434 ************************************ 00:05:39.434 END TEST default_locks_via_rpc 00:05:39.434 ************************************ 00:05:39.434 22:04:34 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:39.434 22:04:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.434 22:04:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.434 22:04:34 -- 
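default_locks_via_rpc toggles the core-mask locks at runtime instead of at start-up: framework_disable_cpumask_locks drops the lock files, framework_enable_cpumask_locks re-acquires them, and locks_exist then confirms the advisory lock is back. The flow implied by the trace, with the Jenkins paths shortened:

    # Hedged sketch of the default_locks_via_rpc flow traced above.
    ./build/bin/spdk_tgt -m 0x1 &
    tgt_pid=$!
    waitforlisten "$tgt_pid"

    scripts/rpc.py framework_disable_cpumask_locks   # release the core 0 lock at runtime
    # no_locks: at this point no /var/tmp/spdk_cpu_lock_* files should remain
    scripts/rpc.py framework_enable_cpumask_locks    # take the lock again over RPC
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock    # verify it is really held

    killprocess "$tgt_pid"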
common/autotest_common.sh@10 -- # set +x 00:05:39.434 ************************************ 00:05:39.434 START TEST non_locking_app_on_locked_coremask 00:05:39.434 ************************************ 00:05:39.434 22:04:34 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:39.434 22:04:34 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3380802 00:05:39.434 22:04:34 -- event/cpu_locks.sh@81 -- # waitforlisten 3380802 /var/tmp/spdk.sock 00:05:39.434 22:04:34 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.434 22:04:34 -- common/autotest_common.sh@819 -- # '[' -z 3380802 ']' 00:05:39.434 22:04:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.434 22:04:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:39.434 22:04:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.434 22:04:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:39.434 22:04:34 -- common/autotest_common.sh@10 -- # set +x 00:05:39.694 [2024-07-24 22:04:34.598885] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:39.694 [2024-07-24 22:04:34.598933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380802 ] 00:05:39.694 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.694 [2024-07-24 22:04:34.651720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.694 [2024-07-24 22:04:34.690448] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.694 [2024-07-24 22:04:34.690565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.264 22:04:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:40.264 22:04:35 -- common/autotest_common.sh@852 -- # return 0 00:05:40.264 22:04:35 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:40.264 22:04:35 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3380822 00:05:40.264 22:04:35 -- event/cpu_locks.sh@85 -- # waitforlisten 3380822 /var/tmp/spdk2.sock 00:05:40.264 22:04:35 -- common/autotest_common.sh@819 -- # '[' -z 3380822 ']' 00:05:40.264 22:04:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.264 22:04:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:40.264 22:04:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.265 22:04:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:40.265 22:04:35 -- common/autotest_common.sh@10 -- # set +x 00:05:40.525 [2024-07-24 22:04:35.416029] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:05:40.525 [2024-07-24 22:04:35.416077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3380822 ] 00:05:40.525 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.525 [2024-07-24 22:04:35.491145] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:40.525 [2024-07-24 22:04:35.491166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.525 [2024-07-24 22:04:35.564596] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.525 [2024-07-24 22:04:35.564713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.093 22:04:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:41.093 22:04:36 -- common/autotest_common.sh@852 -- # return 0 00:05:41.093 22:04:36 -- event/cpu_locks.sh@87 -- # locks_exist 3380802 00:05:41.093 22:04:36 -- event/cpu_locks.sh@22 -- # lslocks -p 3380802 00:05:41.093 22:04:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.071 lslocks: write error 00:05:42.071 22:04:36 -- event/cpu_locks.sh@89 -- # killprocess 3380802 00:05:42.071 22:04:36 -- common/autotest_common.sh@926 -- # '[' -z 3380802 ']' 00:05:42.071 22:04:36 -- common/autotest_common.sh@930 -- # kill -0 3380802 00:05:42.071 22:04:36 -- common/autotest_common.sh@931 -- # uname 00:05:42.071 22:04:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:42.071 22:04:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3380802 00:05:42.071 22:04:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:42.071 22:04:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:42.071 22:04:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3380802' 00:05:42.071 killing process with pid 3380802 00:05:42.071 22:04:36 -- common/autotest_common.sh@945 -- # kill 3380802 00:05:42.071 22:04:36 -- common/autotest_common.sh@950 -- # wait 3380802 00:05:42.332 22:04:37 -- event/cpu_locks.sh@90 -- # killprocess 3380822 00:05:42.332 22:04:37 -- common/autotest_common.sh@926 -- # '[' -z 3380822 ']' 00:05:42.332 22:04:37 -- common/autotest_common.sh@930 -- # kill -0 3380822 00:05:42.332 22:04:37 -- common/autotest_common.sh@931 -- # uname 00:05:42.332 22:04:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:42.332 22:04:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3380822 00:05:42.592 22:04:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:42.592 22:04:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:42.592 22:04:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3380822' 00:05:42.592 killing process with pid 3380822 00:05:42.592 22:04:37 -- common/autotest_common.sh@945 -- # kill 3380822 00:05:42.592 22:04:37 -- common/autotest_common.sh@950 -- # wait 3380822 00:05:42.853 00:05:42.853 real 0m3.225s 00:05:42.853 user 0m3.438s 00:05:42.853 sys 0m0.922s 00:05:42.853 22:04:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.853 22:04:37 -- common/autotest_common.sh@10 -- # set +x 00:05:42.853 ************************************ 00:05:42.853 END TEST non_locking_app_on_locked_coremask 00:05:42.853 ************************************ 00:05:42.853 22:04:37 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
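non_locking_app_on_locked_coremask shows that a second target can share an already-locked core as long as it opts out of locking and uses its own RPC socket. The scenario, stripped of the workspace paths:

    # Hedged sketch of the scenario traced above.
    ./build/bin/spdk_tgt -m 0x1 &                                  # first instance locks core 0
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock                      # starts fine despite the held lock

    lslocks -p "$pid1" | grep -q spdk_cpu_lock                     # only the first instance holds it
    killprocess "$pid1"
    killprocess "$pid2"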
locking_app_on_unlocked_coremask 00:05:42.853 22:04:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.853 22:04:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.853 22:04:37 -- common/autotest_common.sh@10 -- # set +x 00:05:42.853 ************************************ 00:05:42.853 START TEST locking_app_on_unlocked_coremask 00:05:42.853 ************************************ 00:05:42.853 22:04:37 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:42.853 22:04:37 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3381317 00:05:42.853 22:04:37 -- event/cpu_locks.sh@99 -- # waitforlisten 3381317 /var/tmp/spdk.sock 00:05:42.853 22:04:37 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:42.853 22:04:37 -- common/autotest_common.sh@819 -- # '[' -z 3381317 ']' 00:05:42.853 22:04:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.853 22:04:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:42.853 22:04:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.853 22:04:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:42.853 22:04:37 -- common/autotest_common.sh@10 -- # set +x 00:05:42.853 [2024-07-24 22:04:37.866163] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:42.853 [2024-07-24 22:04:37.866217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3381317 ] 00:05:42.853 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.853 [2024-07-24 22:04:37.921231] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:42.853 [2024-07-24 22:04:37.921261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.853 [2024-07-24 22:04:37.955465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:42.853 [2024-07-24 22:04:37.955581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.792 22:04:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.792 22:04:38 -- common/autotest_common.sh@852 -- # return 0 00:05:43.792 22:04:38 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3381547 00:05:43.792 22:04:38 -- event/cpu_locks.sh@103 -- # waitforlisten 3381547 /var/tmp/spdk2.sock 00:05:43.792 22:04:38 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.792 22:04:38 -- common/autotest_common.sh@819 -- # '[' -z 3381547 ']' 00:05:43.792 22:04:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.792 22:04:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.792 22:04:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:43.792 22:04:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.792 22:04:38 -- common/autotest_common.sh@10 -- # set +x 00:05:43.792 [2024-07-24 22:04:38.697352] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:43.792 [2024-07-24 22:04:38.697400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3381547 ] 00:05:43.792 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.792 [2024-07-24 22:04:38.772291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.792 [2024-07-24 22:04:38.845899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.792 [2024-07-24 22:04:38.846050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.732 22:04:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.732 22:04:39 -- common/autotest_common.sh@852 -- # return 0 00:05:44.732 22:04:39 -- event/cpu_locks.sh@105 -- # locks_exist 3381547 00:05:44.732 22:04:39 -- event/cpu_locks.sh@22 -- # lslocks -p 3381547 00:05:44.732 22:04:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.992 lslocks: write error 00:05:44.992 22:04:40 -- event/cpu_locks.sh@107 -- # killprocess 3381317 00:05:44.992 22:04:40 -- common/autotest_common.sh@926 -- # '[' -z 3381317 ']' 00:05:44.992 22:04:40 -- common/autotest_common.sh@930 -- # kill -0 3381317 00:05:44.992 22:04:40 -- common/autotest_common.sh@931 -- # uname 00:05:44.992 22:04:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:44.992 22:04:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3381317 00:05:44.992 22:04:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:44.992 22:04:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:44.992 22:04:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3381317' 00:05:44.992 killing process with pid 3381317 00:05:44.992 22:04:40 -- common/autotest_common.sh@945 -- # kill 3381317 00:05:44.992 22:04:40 -- common/autotest_common.sh@950 -- # wait 3381317 00:05:45.562 22:04:40 -- event/cpu_locks.sh@108 -- # killprocess 3381547 00:05:45.562 22:04:40 -- common/autotest_common.sh@926 -- # '[' -z 3381547 ']' 00:05:45.562 22:04:40 -- common/autotest_common.sh@930 -- # kill -0 3381547 00:05:45.562 22:04:40 -- common/autotest_common.sh@931 -- # uname 00:05:45.562 22:04:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:45.562 22:04:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3381547 00:05:45.823 22:04:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:45.823 22:04:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:45.823 22:04:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3381547' 00:05:45.823 killing process with pid 3381547 00:05:45.823 22:04:40 -- common/autotest_common.sh@945 -- # kill 3381547 00:05:45.823 22:04:40 -- common/autotest_common.sh@950 -- # wait 3381547 00:05:46.084 00:05:46.084 real 0m3.200s 00:05:46.084 user 0m3.435s 00:05:46.084 sys 0m0.919s 00:05:46.084 22:04:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.084 22:04:41 -- common/autotest_common.sh@10 -- # set +x 00:05:46.084 ************************************ 00:05:46.084 END TEST locking_app_on_unlocked_coremask 
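locking_app_on_unlocked_coremask is the mirror case: the first target opts out of locking, so a later instance on the same core is the one that claims the lock. A hedged sketch:

    # Hedged sketch of the locking_app_on_unlocked_coremask scenario.
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &          # no lock taken
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &           # claims the core 0 lock
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock

    lslocks -p "$pid2" | grep -q spdk_cpu_lock                     # the lock belongs to the second instance
    killprocess "$pid1"
    killprocess "$pid2"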
00:05:46.084 ************************************ 00:05:46.084 22:04:41 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:46.084 22:04:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.084 22:04:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.084 22:04:41 -- common/autotest_common.sh@10 -- # set +x 00:05:46.084 ************************************ 00:05:46.084 START TEST locking_app_on_locked_coremask 00:05:46.084 ************************************ 00:05:46.084 22:04:41 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:46.084 22:04:41 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3381857 00:05:46.084 22:04:41 -- event/cpu_locks.sh@116 -- # waitforlisten 3381857 /var/tmp/spdk.sock 00:05:46.084 22:04:41 -- common/autotest_common.sh@819 -- # '[' -z 3381857 ']' 00:05:46.084 22:04:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.084 22:04:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.084 22:04:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.084 22:04:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.084 22:04:41 -- common/autotest_common.sh@10 -- # set +x 00:05:46.084 22:04:41 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.084 [2024-07-24 22:04:41.096495] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:46.084 [2024-07-24 22:04:41.096544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3381857 ] 00:05:46.084 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.084 [2024-07-24 22:04:41.150178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.084 [2024-07-24 22:04:41.189210] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.084 [2024-07-24 22:04:41.189320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.024 22:04:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.024 22:04:41 -- common/autotest_common.sh@852 -- # return 0 00:05:47.024 22:04:41 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3382061 00:05:47.024 22:04:41 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3382061 /var/tmp/spdk2.sock 00:05:47.024 22:04:41 -- common/autotest_common.sh@640 -- # local es=0 00:05:47.024 22:04:41 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3382061 /var/tmp/spdk2.sock 00:05:47.024 22:04:41 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:47.024 22:04:41 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.024 22:04:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:47.024 22:04:41 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:47.024 22:04:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:47.024 22:04:41 -- common/autotest_common.sh@643 -- # waitforlisten 3382061 /var/tmp/spdk2.sock 00:05:47.024 22:04:41 -- common/autotest_common.sh@819 -- 
# '[' -z 3382061 ']' 00:05:47.024 22:04:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.024 22:04:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.024 22:04:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.024 22:04:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.024 22:04:41 -- common/autotest_common.sh@10 -- # set +x 00:05:47.024 [2024-07-24 22:04:41.915666] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:47.024 [2024-07-24 22:04:41.915716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382061 ] 00:05:47.024 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.024 [2024-07-24 22:04:41.991027] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3381857 has claimed it. 00:05:47.024 [2024-07-24 22:04:41.991069] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:47.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3382061) - No such process 00:05:47.594 ERROR: process (pid: 3382061) is no longer running 00:05:47.594 22:04:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.594 22:04:42 -- common/autotest_common.sh@852 -- # return 1 00:05:47.594 22:04:42 -- common/autotest_common.sh@643 -- # es=1 00:05:47.594 22:04:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:47.594 22:04:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:47.594 22:04:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:47.594 22:04:42 -- event/cpu_locks.sh@122 -- # locks_exist 3381857 00:05:47.594 22:04:42 -- event/cpu_locks.sh@22 -- # lslocks -p 3381857 00:05:47.594 22:04:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.594 lslocks: write error 00:05:47.594 22:04:42 -- event/cpu_locks.sh@124 -- # killprocess 3381857 00:05:47.594 22:04:42 -- common/autotest_common.sh@926 -- # '[' -z 3381857 ']' 00:05:47.594 22:04:42 -- common/autotest_common.sh@930 -- # kill -0 3381857 00:05:47.594 22:04:42 -- common/autotest_common.sh@931 -- # uname 00:05:47.594 22:04:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:47.594 22:04:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3381857 00:05:47.594 22:04:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:47.594 22:04:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:47.594 22:04:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3381857' 00:05:47.594 killing process with pid 3381857 00:05:47.594 22:04:42 -- common/autotest_common.sh@945 -- # kill 3381857 00:05:47.594 22:04:42 -- common/autotest_common.sh@950 -- # wait 3381857 00:05:48.165 00:05:48.165 real 0m1.958s 00:05:48.165 user 0m2.156s 00:05:48.165 sys 0m0.482s 00:05:48.165 22:04:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.165 22:04:43 -- common/autotest_common.sh@10 -- # set +x 00:05:48.165 ************************************ 00:05:48.165 END TEST locking_app_on_locked_coremask 00:05:48.165 ************************************ 00:05:48.165 
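locking_app_on_locked_coremask covers the failure path: with the lock held and locking left enabled, a second instance on the same core must refuse to start, which is the claim_cpu_cores error shown above. A sketch of the assertion:

    # Hedged sketch of the failure-path check traced above.
    ./build/bin/spdk_tgt -m 0x1 &                                  # holds the core 0 lock
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &           # same core, locking still on
    pid2=$!
    # the second instance logs "Cannot create lock on core 0, probably process <pid1>
    # has claimed it" and exits, so waitforlisten must fail and NOT makes that a pass
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock

    lslocks -p "$pid1" | grep -q spdk_cpu_lock                     # first instance still owns the lock
    killprocess "$pid1"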
22:04:43 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:48.165 22:04:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.165 22:04:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.165 22:04:43 -- common/autotest_common.sh@10 -- # set +x 00:05:48.165 ************************************ 00:05:48.165 START TEST locking_overlapped_coremask 00:05:48.165 ************************************ 00:05:48.165 22:04:43 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:48.165 22:04:43 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3382320 00:05:48.165 22:04:43 -- event/cpu_locks.sh@133 -- # waitforlisten 3382320 /var/tmp/spdk.sock 00:05:48.165 22:04:43 -- common/autotest_common.sh@819 -- # '[' -z 3382320 ']' 00:05:48.165 22:04:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.165 22:04:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.165 22:04:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.165 22:04:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.165 22:04:43 -- common/autotest_common.sh@10 -- # set +x 00:05:48.165 22:04:43 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:48.165 [2024-07-24 22:04:43.089695] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:48.165 [2024-07-24 22:04:43.089742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382320 ] 00:05:48.165 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.165 [2024-07-24 22:04:43.142875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.165 [2024-07-24 22:04:43.183461] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.165 [2024-07-24 22:04:43.183608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.165 [2024-07-24 22:04:43.183729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.165 [2024-07-24 22:04:43.183730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.106 22:04:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.106 22:04:43 -- common/autotest_common.sh@852 -- # return 0 00:05:49.106 22:04:43 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3382418 00:05:49.106 22:04:43 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3382418 /var/tmp/spdk2.sock 00:05:49.106 22:04:43 -- common/autotest_common.sh@640 -- # local es=0 00:05:49.106 22:04:43 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3382418 /var/tmp/spdk2.sock 00:05:49.106 22:04:43 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:49.106 22:04:43 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:49.106 22:04:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:49.106 22:04:43 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:49.106 22:04:43 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:49.106 22:04:43 
-- common/autotest_common.sh@643 -- # waitforlisten 3382418 /var/tmp/spdk2.sock 00:05:49.106 22:04:43 -- common/autotest_common.sh@819 -- # '[' -z 3382418 ']' 00:05:49.106 22:04:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.106 22:04:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:49.107 22:04:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.107 22:04:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:49.107 22:04:43 -- common/autotest_common.sh@10 -- # set +x 00:05:49.107 [2024-07-24 22:04:43.927822] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:49.107 [2024-07-24 22:04:43.927875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382418 ] 00:05:49.107 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.107 [2024-07-24 22:04:44.004131] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3382320 has claimed it. 00:05:49.107 [2024-07-24 22:04:44.004168] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:49.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3382418) - No such process 00:05:49.677 ERROR: process (pid: 3382418) is no longer running 00:05:49.677 22:04:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.677 22:04:44 -- common/autotest_common.sh@852 -- # return 1 00:05:49.677 22:04:44 -- common/autotest_common.sh@643 -- # es=1 00:05:49.677 22:04:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:49.677 22:04:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:49.677 22:04:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:49.677 22:04:44 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:49.677 22:04:44 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:49.677 22:04:44 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:49.677 22:04:44 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:49.677 22:04:44 -- event/cpu_locks.sh@141 -- # killprocess 3382320 00:05:49.677 22:04:44 -- common/autotest_common.sh@926 -- # '[' -z 3382320 ']' 00:05:49.677 22:04:44 -- common/autotest_common.sh@930 -- # kill -0 3382320 00:05:49.677 22:04:44 -- common/autotest_common.sh@931 -- # uname 00:05:49.677 22:04:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:49.677 22:04:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3382320 00:05:49.677 22:04:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:49.677 22:04:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:49.677 22:04:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3382320' 00:05:49.677 killing process with pid 3382320 00:05:49.677 22:04:44 -- common/autotest_common.sh@945 -- # kill 3382320 00:05:49.677 22:04:44 
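For the overlapped-coremask tests the first target runs with -m 0x7 (cores 0-2) and the overlapping second target with -m 0x1c is expected to fail on core 2; afterwards check_remaining_locks verifies that exactly one lock file per claimed core is left behind. As traced at cpu_locks.sh@36-38:

    # Reconstruction of check_remaining_locks from the trace.
    check_remaining_locks() {
        # cores 0-2 of the 0x7 mask should each have left an advisory lock file behind
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }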
-- common/autotest_common.sh@950 -- # wait 3382320 00:05:49.937 00:05:49.937 real 0m1.854s 00:05:49.937 user 0m5.321s 00:05:49.937 sys 0m0.395s 00:05:49.937 22:04:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.937 22:04:44 -- common/autotest_common.sh@10 -- # set +x 00:05:49.937 ************************************ 00:05:49.937 END TEST locking_overlapped_coremask 00:05:49.937 ************************************ 00:05:49.937 22:04:44 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:49.937 22:04:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:49.937 22:04:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.937 22:04:44 -- common/autotest_common.sh@10 -- # set +x 00:05:49.937 ************************************ 00:05:49.937 START TEST locking_overlapped_coremask_via_rpc 00:05:49.937 ************************************ 00:05:49.937 22:04:44 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:49.937 22:04:44 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3382597 00:05:49.937 22:04:44 -- event/cpu_locks.sh@149 -- # waitforlisten 3382597 /var/tmp/spdk.sock 00:05:49.937 22:04:44 -- common/autotest_common.sh@819 -- # '[' -z 3382597 ']' 00:05:49.937 22:04:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.937 22:04:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:49.937 22:04:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.937 22:04:44 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:49.937 22:04:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:49.937 22:04:44 -- common/autotest_common.sh@10 -- # set +x 00:05:49.937 [2024-07-24 22:04:44.982386] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:49.937 [2024-07-24 22:04:44.982433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382597 ] 00:05:49.937 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.938 [2024-07-24 22:04:45.033992] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:49.938 [2024-07-24 22:04:45.034015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.198 [2024-07-24 22:04:45.074366] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.198 [2024-07-24 22:04:45.074505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.198 [2024-07-24 22:04:45.074621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.198 [2024-07-24 22:04:45.074622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.767 22:04:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:50.767 22:04:45 -- common/autotest_common.sh@852 -- # return 0 00:05:50.767 22:04:45 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3382829 00:05:50.767 22:04:45 -- event/cpu_locks.sh@153 -- # waitforlisten 3382829 /var/tmp/spdk2.sock 00:05:50.767 22:04:45 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:50.767 22:04:45 -- common/autotest_common.sh@819 -- # '[' -z 3382829 ']' 00:05:50.767 22:04:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.767 22:04:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:50.767 22:04:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.767 22:04:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:50.767 22:04:45 -- common/autotest_common.sh@10 -- # set +x 00:05:50.767 [2024-07-24 22:04:45.814937] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:50.767 [2024-07-24 22:04:45.814986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3382829 ] 00:05:50.767 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.767 [2024-07-24 22:04:45.891854] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:50.767 [2024-07-24 22:04:45.891877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.027 [2024-07-24 22:04:45.970890] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.027 [2024-07-24 22:04:45.971069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.027 [2024-07-24 22:04:45.971135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.027 [2024-07-24 22:04:45.971136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:51.598 22:04:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.598 22:04:46 -- common/autotest_common.sh@852 -- # return 0 00:05:51.598 22:04:46 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:51.598 22:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.598 22:04:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.598 22:04:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:51.598 22:04:46 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:51.598 22:04:46 -- common/autotest_common.sh@640 -- # local es=0 00:05:51.598 22:04:46 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:51.598 22:04:46 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:51.598 22:04:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:51.598 22:04:46 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:51.598 22:04:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:51.598 22:04:46 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:51.598 22:04:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.598 22:04:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.598 [2024-07-24 22:04:46.624110] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3382597 has claimed it. 00:05:51.598 request: 00:05:51.598 { 00:05:51.598 "method": "framework_enable_cpumask_locks", 00:05:51.598 "req_id": 1 00:05:51.598 } 00:05:51.598 Got JSON-RPC error response 00:05:51.598 response: 00:05:51.598 { 00:05:51.598 "code": -32603, 00:05:51.598 "message": "Failed to claim CPU core: 2" 00:05:51.598 } 00:05:51.598 22:04:46 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:51.598 22:04:46 -- common/autotest_common.sh@643 -- # es=1 00:05:51.598 22:04:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:51.598 22:04:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:51.598 22:04:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:51.598 22:04:46 -- event/cpu_locks.sh@158 -- # waitforlisten 3382597 /var/tmp/spdk.sock 00:05:51.598 22:04:46 -- common/autotest_common.sh@819 -- # '[' -z 3382597 ']' 00:05:51.598 22:04:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.598 22:04:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:51.598 22:04:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
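With --disable-cpumask-locks on both instances, locking is deferred until framework_enable_cpumask_locks is called over RPC; the first socket wins cores 0-2 and the overlapping second instance (mask 0x1c shares core 2) gets the -32603 JSON-RPC error shown above. The assertion reduces to:

    # Hedged sketch of the RPC-driven claim conflict traced above.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # cores 0-2 now locked
    # the overlapping instance must be refused:
    #   {"code": -32603, "message": "Failed to claim CPU core: 2"}
    NOT scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks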
00:05:51.598 22:04:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:51.598 22:04:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.858 22:04:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.858 22:04:46 -- common/autotest_common.sh@852 -- # return 0 00:05:51.858 22:04:46 -- event/cpu_locks.sh@159 -- # waitforlisten 3382829 /var/tmp/spdk2.sock 00:05:51.858 22:04:46 -- common/autotest_common.sh@819 -- # '[' -z 3382829 ']' 00:05:51.858 22:04:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.858 22:04:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:51.858 22:04:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.858 22:04:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:51.858 22:04:46 -- common/autotest_common.sh@10 -- # set +x 00:05:52.118 22:04:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.118 22:04:47 -- common/autotest_common.sh@852 -- # return 0 00:05:52.118 22:04:47 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:52.118 22:04:47 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:52.118 22:04:47 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:52.118 22:04:47 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:52.118 00:05:52.118 real 0m2.073s 00:05:52.118 user 0m0.828s 00:05:52.118 sys 0m0.178s 00:05:52.118 22:04:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.118 22:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.118 ************************************ 00:05:52.118 END TEST locking_overlapped_coremask_via_rpc 00:05:52.118 ************************************ 00:05:52.118 22:04:47 -- event/cpu_locks.sh@174 -- # cleanup 00:05:52.118 22:04:47 -- event/cpu_locks.sh@15 -- # [[ -z 3382597 ]] 00:05:52.118 22:04:47 -- event/cpu_locks.sh@15 -- # killprocess 3382597 00:05:52.118 22:04:47 -- common/autotest_common.sh@926 -- # '[' -z 3382597 ']' 00:05:52.118 22:04:47 -- common/autotest_common.sh@930 -- # kill -0 3382597 00:05:52.118 22:04:47 -- common/autotest_common.sh@931 -- # uname 00:05:52.118 22:04:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:52.118 22:04:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3382597 00:05:52.118 22:04:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:52.118 22:04:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:52.118 22:04:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3382597' 00:05:52.118 killing process with pid 3382597 00:05:52.118 22:04:47 -- common/autotest_common.sh@945 -- # kill 3382597 00:05:52.118 22:04:47 -- common/autotest_common.sh@950 -- # wait 3382597 00:05:52.378 22:04:47 -- event/cpu_locks.sh@16 -- # [[ -z 3382829 ]] 00:05:52.378 22:04:47 -- event/cpu_locks.sh@16 -- # killprocess 3382829 00:05:52.378 22:04:47 -- common/autotest_common.sh@926 -- # '[' -z 3382829 ']' 00:05:52.378 22:04:47 -- common/autotest_common.sh@930 -- # kill -0 3382829 00:05:52.378 22:04:47 -- common/autotest_common.sh@931 -- # uname 
00:05:52.378 22:04:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:52.378 22:04:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3382829 00:05:52.378 22:04:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:52.378 22:04:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:52.378 22:04:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3382829' 00:05:52.378 killing process with pid 3382829 00:05:52.378 22:04:47 -- common/autotest_common.sh@945 -- # kill 3382829 00:05:52.378 22:04:47 -- common/autotest_common.sh@950 -- # wait 3382829 00:05:52.638 22:04:47 -- event/cpu_locks.sh@18 -- # rm -f 00:05:52.639 22:04:47 -- event/cpu_locks.sh@1 -- # cleanup 00:05:52.639 22:04:47 -- event/cpu_locks.sh@15 -- # [[ -z 3382597 ]] 00:05:52.639 22:04:47 -- event/cpu_locks.sh@15 -- # killprocess 3382597 00:05:52.639 22:04:47 -- common/autotest_common.sh@926 -- # '[' -z 3382597 ']' 00:05:52.639 22:04:47 -- common/autotest_common.sh@930 -- # kill -0 3382597 00:05:52.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3382597) - No such process 00:05:52.639 22:04:47 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3382597 is not found' 00:05:52.639 Process with pid 3382597 is not found 00:05:52.639 22:04:47 -- event/cpu_locks.sh@16 -- # [[ -z 3382829 ]] 00:05:52.639 22:04:47 -- event/cpu_locks.sh@16 -- # killprocess 3382829 00:05:52.639 22:04:47 -- common/autotest_common.sh@926 -- # '[' -z 3382829 ']' 00:05:52.639 22:04:47 -- common/autotest_common.sh@930 -- # kill -0 3382829 00:05:52.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3382829) - No such process 00:05:52.639 22:04:47 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3382829 is not found' 00:05:52.639 Process with pid 3382829 is not found 00:05:52.639 22:04:47 -- event/cpu_locks.sh@18 -- # rm -f 00:05:52.639 00:05:52.639 real 0m16.242s 00:05:52.639 user 0m28.682s 00:05:52.639 sys 0m4.595s 00:05:52.639 22:04:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.639 22:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.639 ************************************ 00:05:52.639 END TEST cpu_locks 00:05:52.639 ************************************ 00:05:52.899 00:05:52.899 real 0m40.309s 00:05:52.899 user 1m17.569s 00:05:52.899 sys 0m7.493s 00:05:52.899 22:04:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.899 22:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.899 ************************************ 00:05:52.899 END TEST event 00:05:52.899 ************************************ 00:05:52.899 22:04:47 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:52.899 22:04:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.899 22:04:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.899 22:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.899 ************************************ 00:05:52.899 START TEST thread 00:05:52.899 ************************************ 00:05:52.899 22:04:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:52.899 * Looking for test storage... 
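The cleanup path traced above (killprocess) probes each pid with kill -0 before trying to stop it, which is why the already-exited workers only produce "No such process" notes. A simplified sketch of that helper; the real one also inspects ps -o comm= so it never signals a sudo wrapper directly:

  killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2> /dev/null; then
      echo "Process with pid $pid is not found"
      return 0
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null || true
  }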
00:05:52.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:52.899 22:04:47 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:52.900 22:04:47 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:52.900 22:04:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.900 22:04:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.900 ************************************ 00:05:52.900 START TEST thread_poller_perf 00:05:52.900 ************************************ 00:05:52.900 22:04:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:52.900 [2024-07-24 22:04:47.927834] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:52.900 [2024-07-24 22:04:47.927912] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383187 ] 00:05:52.900 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.900 [2024-07-24 22:04:47.985501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.900 [2024-07-24 22:04:48.024276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.900 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:54.280 ====================================== 00:05:54.280 busy:2309974920 (cyc) 00:05:54.280 total_run_count: 397000 00:05:54.280 tsc_hz: 2300000000 (cyc) 00:05:54.280 ====================================== 00:05:54.280 poller_cost: 5818 (cyc), 2529 (nsec) 00:05:54.280 00:05:54.280 real 0m1.181s 00:05:54.280 user 0m1.111s 00:05:54.280 sys 0m0.066s 00:05:54.280 22:04:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.280 22:04:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.280 ************************************ 00:05:54.280 END TEST thread_poller_perf 00:05:54.280 ************************************ 00:05:54.280 22:04:49 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:54.280 22:04:49 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:54.280 22:04:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.280 22:04:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.280 ************************************ 00:05:54.280 START TEST thread_poller_perf 00:05:54.280 ************************************ 00:05:54.280 22:04:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:54.280 [2024-07-24 22:04:49.145261] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
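The poller_cost line above is just the busy cycle count divided by the number of poller runs, converted to nanoseconds with the reported TSC frequency. Re-deriving the figures from this run (integer arithmetic, so values round down):

  busy=2309974920 runs=397000 tsc_hz=2300000000
  echo $((busy / runs))                          # 5818 cycles per poll, as reported
  echo $((busy / runs * 1000000000 / tsc_hz))    # 2529 ns per poll at 2.3 GHz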
00:05:54.280 [2024-07-24 22:04:49.145340] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383416 ] 00:05:54.280 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.280 [2024-07-24 22:04:49.202278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.280 [2024-07-24 22:04:49.236199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.280 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:55.221 ====================================== 00:05:55.221 busy:2301876358 (cyc) 00:05:55.221 total_run_count: 5445000 00:05:55.221 tsc_hz: 2300000000 (cyc) 00:05:55.221 ====================================== 00:05:55.221 poller_cost: 422 (cyc), 183 (nsec) 00:05:55.221 00:05:55.221 real 0m1.176s 00:05:55.221 user 0m1.105s 00:05:55.221 sys 0m0.066s 00:05:55.221 22:04:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.221 22:04:50 -- common/autotest_common.sh@10 -- # set +x 00:05:55.221 ************************************ 00:05:55.221 END TEST thread_poller_perf 00:05:55.221 ************************************ 00:05:55.221 22:04:50 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:55.221 00:05:55.221 real 0m2.518s 00:05:55.221 user 0m2.278s 00:05:55.221 sys 0m0.251s 00:05:55.221 22:04:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.221 22:04:50 -- common/autotest_common.sh@10 -- # set +x 00:05:55.221 ************************************ 00:05:55.221 END TEST thread 00:05:55.221 ************************************ 00:05:55.481 22:04:50 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:55.481 22:04:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.481 22:04:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.481 22:04:50 -- common/autotest_common.sh@10 -- # set +x 00:05:55.481 ************************************ 00:05:55.481 START TEST accel 00:05:55.481 ************************************ 00:05:55.481 22:04:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:55.481 * Looking for test storage... 00:05:55.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:55.481 22:04:50 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:55.481 22:04:50 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:55.481 22:04:50 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.481 22:04:50 -- accel/accel.sh@59 -- # spdk_tgt_pid=3383704 00:05:55.481 22:04:50 -- accel/accel.sh@60 -- # waitforlisten 3383704 00:05:55.481 22:04:50 -- common/autotest_common.sh@819 -- # '[' -z 3383704 ']' 00:05:55.481 22:04:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.481 22:04:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.481 22:04:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
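The two poller_perf runs differ only in the poller period: with -l 1 each poller fires on a 1 microsecond timer, with -l 0 it is run back-to-back, which is why the run count jumps from 397000 to 5445000 and the per-poll cost drops from 5818 to 422 cycles. The underlying invocations, shortened to repository-relative paths:

  ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1 us period
  ./test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # no period, busy polling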
00:05:55.481 22:04:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.481 22:04:50 -- common/autotest_common.sh@10 -- # set +x 00:05:55.481 22:04:50 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:55.481 22:04:50 -- accel/accel.sh@58 -- # build_accel_config 00:05:55.481 22:04:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.481 22:04:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.481 22:04:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.481 22:04:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.481 22:04:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.481 22:04:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.481 22:04:50 -- accel/accel.sh@42 -- # jq -r . 00:05:55.481 [2024-07-24 22:04:50.493861] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:55.481 [2024-07-24 22:04:50.493910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383704 ] 00:05:55.481 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.481 [2024-07-24 22:04:50.549499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.481 [2024-07-24 22:04:50.588110] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.481 [2024-07-24 22:04:50.588232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.418 22:04:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:56.418 22:04:51 -- common/autotest_common.sh@852 -- # return 0 00:05:56.418 22:04:51 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:56.418 22:04:51 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:56.418 22:04:51 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:56.418 22:04:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:56.418 22:04:51 -- common/autotest_common.sh@10 -- # set +x 00:05:56.418 22:04:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # 
IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # IFS== 00:05:56.418 22:04:51 -- accel/accel.sh@64 -- # read -r opc module 00:05:56.418 22:04:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:56.418 22:04:51 -- accel/accel.sh@67 -- # killprocess 3383704 00:05:56.418 22:04:51 -- common/autotest_common.sh@926 -- # '[' -z 3383704 ']' 00:05:56.418 22:04:51 -- common/autotest_common.sh@930 -- # kill -0 3383704 00:05:56.418 22:04:51 -- common/autotest_common.sh@931 -- # uname 00:05:56.418 22:04:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:56.418 22:04:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3383704 00:05:56.418 22:04:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:56.418 22:04:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:56.418 22:04:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3383704' 00:05:56.418 killing process with pid 3383704 00:05:56.418 22:04:51 -- common/autotest_common.sh@945 -- # kill 3383704 00:05:56.418 22:04:51 -- common/autotest_common.sh@950 -- # wait 3383704 00:05:56.678 22:04:51 -- accel/accel.sh@68 -- # trap - ERR 00:05:56.678 22:04:51 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:56.678 22:04:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:56.678 22:04:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.678 22:04:51 -- common/autotest_common.sh@10 -- # set +x 00:05:56.678 22:04:51 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:56.678 22:04:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:56.678 22:04:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.678 22:04:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.678 22:04:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.678 22:04:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.678 22:04:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.678 22:04:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.678 22:04:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.678 22:04:51 -- accel/accel.sh@42 -- # jq -r . 
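The expected_opcs table built above is nothing more than the accel_get_opc_assignments RPC output flattened by jq and split on "=". One way to rebuild it interactively, assuming the stock scripts/rpc.py client and the default socket; the jq filter is the one the script itself traces:

  declare -A expected_opcs
  exp_opcs=($(./scripts/rpc.py -s /var/tmp/spdk.sock accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
  for opc_opt in "${exp_opcs[@]}"; do
    IFS== read -r opc module <<< "$opc_opt"
    expected_opcs["$opc"]=$module    # every opcode maps to "software" in this run
  done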
00:05:56.678 22:04:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.678 22:04:51 -- common/autotest_common.sh@10 -- # set +x 00:05:56.678 22:04:51 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:56.678 22:04:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:56.678 22:04:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.678 22:04:51 -- common/autotest_common.sh@10 -- # set +x 00:05:56.678 ************************************ 00:05:56.678 START TEST accel_missing_filename 00:05:56.678 ************************************ 00:05:56.678 22:04:51 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:56.678 22:04:51 -- common/autotest_common.sh@640 -- # local es=0 00:05:56.678 22:04:51 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:56.678 22:04:51 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:56.678 22:04:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:56.678 22:04:51 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:56.678 22:04:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:56.678 22:04:51 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:56.678 22:04:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:56.678 22:04:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.678 22:04:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.678 22:04:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.678 22:04:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.678 22:04:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.678 22:04:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.678 22:04:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.678 22:04:51 -- accel/accel.sh@42 -- # jq -r . 00:05:56.678 [2024-07-24 22:04:51.772328] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:56.678 [2024-07-24 22:04:51.772399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383977 ] 00:05:56.678 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.938 [2024-07-24 22:04:51.827805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.938 [2024-07-24 22:04:51.864560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.938 [2024-07-24 22:04:51.905114] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:56.938 [2024-07-24 22:04:51.964551] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:56.938 A filename is required. 
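accel_missing_filename passes because the compress workload genuinely refuses to start without an input file; the -l flag named in the help text further down supplies it. Roughly, from the repository root:

  ./build/examples/accel_perf -t 1 -w compress                      # fails: "A filename is required."
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib  # form used by the later compress tests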
00:05:56.938 22:04:52 -- common/autotest_common.sh@643 -- # es=234 00:05:56.938 22:04:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:56.938 22:04:52 -- common/autotest_common.sh@652 -- # es=106 00:05:56.938 22:04:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:56.938 22:04:52 -- common/autotest_common.sh@660 -- # es=1 00:05:56.938 22:04:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:56.938 00:05:56.938 real 0m0.281s 00:05:56.938 user 0m0.204s 00:05:56.938 sys 0m0.119s 00:05:56.939 22:04:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.939 22:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:56.939 ************************************ 00:05:56.939 END TEST accel_missing_filename 00:05:56.939 ************************************ 00:05:56.939 22:04:52 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:56.939 22:04:52 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:56.939 22:04:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.939 22:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:56.939 ************************************ 00:05:56.939 START TEST accel_compress_verify 00:05:56.939 ************************************ 00:05:56.939 22:04:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:56.939 22:04:52 -- common/autotest_common.sh@640 -- # local es=0 00:05:56.939 22:04:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:56.939 22:04:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:56.939 22:04:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:56.939 22:04:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:56.939 22:04:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:56.939 22:04:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:56.939 22:04:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:56.939 22:04:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.939 22:04:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.939 22:04:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.939 22:04:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.939 22:04:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.939 22:04:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.939 22:04:52 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.939 22:04:52 -- accel/accel.sh@42 -- # jq -r . 00:05:57.198 [2024-07-24 22:04:52.083715] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
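The es bookkeeping above is the NOT wrapper from autotest_common.sh normalizing the failing exit status (234 is first reduced because it is above 128, then mapped to 1) before inverting it. In spirit it behaves like this simplified helper:

  NOT() {
    if "$@"; then
      return 1    # the wrapped command was expected to fail
    fi
    return 0      # failure observed, so the negative test passes
  }
  NOT accel_perf -t 1 -w compress    # succeeds, because compress without -l aborts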
00:05:57.199 [2024-07-24 22:04:52.083778] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384019 ] 00:05:57.199 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.199 [2024-07-24 22:04:52.138700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.199 [2024-07-24 22:04:52.175771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.199 [2024-07-24 22:04:52.216422] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.199 [2024-07-24 22:04:52.276196] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:57.459 00:05:57.459 Compression does not support the verify option, aborting. 00:05:57.459 22:04:52 -- common/autotest_common.sh@643 -- # es=161 00:05:57.459 22:04:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:57.459 22:04:52 -- common/autotest_common.sh@652 -- # es=33 00:05:57.459 22:04:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:57.459 22:04:52 -- common/autotest_common.sh@660 -- # es=1 00:05:57.459 22:04:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:57.459 00:05:57.459 real 0m0.280s 00:05:57.459 user 0m0.204s 00:05:57.459 sys 0m0.115s 00:05:57.459 22:04:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.459 22:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:57.459 ************************************ 00:05:57.459 END TEST accel_compress_verify 00:05:57.459 ************************************ 00:05:57.459 22:04:52 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:57.459 22:04:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:57.459 22:04:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.459 22:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:57.459 ************************************ 00:05:57.459 START TEST accel_wrong_workload 00:05:57.459 ************************************ 00:05:57.459 22:04:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:57.459 22:04:52 -- common/autotest_common.sh@640 -- # local es=0 00:05:57.459 22:04:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:57.459 22:04:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:57.459 22:04:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:57.459 22:04:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:57.459 22:04:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:57.459 22:04:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:57.459 22:04:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:57.459 22:04:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.459 22:04:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.459 22:04:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.459 22:04:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.459 22:04:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.459 22:04:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.459 22:04:52 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.459 22:04:52 -- accel/accel.sh@42 -- # jq -r . 
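accel_compress_verify covers the second restriction on the compress workload: -y (verify) is rejected outright, as the abort message above shows. Assuming the same paths as before:

  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -y   # aborts: verify unsupported for compress
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib      # runs without verification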
00:05:57.459 Unsupported workload type: foobar 00:05:57.459 [2024-07-24 22:04:52.397160] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:57.459 accel_perf options: 00:05:57.459 [-h help message] 00:05:57.459 [-q queue depth per core] 00:05:57.459 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:57.459 [-T number of threads per core 00:05:57.459 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:57.459 [-t time in seconds] 00:05:57.459 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:57.459 [ dif_verify, , dif_generate, dif_generate_copy 00:05:57.460 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:57.460 [-l for compress/decompress workloads, name of uncompressed input file 00:05:57.460 [-S for crc32c workload, use this seed value (default 0) 00:05:57.460 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:57.460 [-f for fill workload, use this BYTE value (default 255) 00:05:57.460 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:57.460 [-y verify result if this switch is on] 00:05:57.460 [-a tasks to allocate per core (default: same value as -q)] 00:05:57.460 Can be used to spread operations across a wider range of memory. 00:05:57.460 22:04:52 -- common/autotest_common.sh@643 -- # es=1 00:05:57.460 22:04:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:57.460 22:04:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:57.460 22:04:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:57.460 00:05:57.460 real 0m0.029s 00:05:57.460 user 0m0.020s 00:05:57.460 sys 0m0.008s 00:05:57.460 22:04:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.460 22:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:57.460 ************************************ 00:05:57.460 END TEST accel_wrong_workload 00:05:57.460 ************************************ 00:05:57.460 Error: writing output failed: Broken pipe 00:05:57.460 22:04:52 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:57.460 22:04:52 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:57.460 22:04:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.460 22:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:57.460 ************************************ 00:05:57.460 START TEST accel_negative_buffers 00:05:57.460 ************************************ 00:05:57.460 22:04:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:57.460 22:04:52 -- common/autotest_common.sh@640 -- # local es=0 00:05:57.460 22:04:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:57.460 22:04:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:57.460 22:04:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:57.460 22:04:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:57.460 22:04:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:57.460 22:04:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:57.460 22:04:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:05:57.460 22:04:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.460 22:04:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.460 22:04:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.460 22:04:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.460 22:04:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.460 22:04:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.460 22:04:52 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.460 22:04:52 -- accel/accel.sh@42 -- # jq -r . 00:05:57.460 -x option must be non-negative. 00:05:57.460 [2024-07-24 22:04:52.452488] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:57.460 accel_perf options: 00:05:57.460 [-h help message] 00:05:57.460 [-q queue depth per core] 00:05:57.460 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:57.460 [-T number of threads per core 00:05:57.460 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:57.460 [-t time in seconds] 00:05:57.460 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:57.460 [ dif_verify, , dif_generate, dif_generate_copy 00:05:57.460 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:57.460 [-l for compress/decompress workloads, name of uncompressed input file 00:05:57.460 [-S for crc32c workload, use this seed value (default 0) 00:05:57.460 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:57.460 [-f for fill workload, use this BYTE value (default 255) 00:05:57.460 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:57.460 [-y verify result if this switch is on] 00:05:57.460 [-a tasks to allocate per core (default: same value as -q)] 00:05:57.460 Can be used to spread operations across a wider range of memory. 
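For the xor workload the -x source-buffer count must be non-negative, and the help text printed above gives 2 as the default and minimum, so the smallest accepted invocation looks like:

  ./build/examples/accel_perf -t 1 -w xor -y -x -1   # rejected, as traced above
  ./build/examples/accel_perf -t 1 -w xor -y -x 2    # minimum source-buffer count per the help text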
00:05:57.460 22:04:52 -- common/autotest_common.sh@643 -- # es=1 00:05:57.460 22:04:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:57.460 22:04:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:57.460 22:04:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:57.460 00:05:57.460 real 0m0.026s 00:05:57.460 user 0m0.016s 00:05:57.460 sys 0m0.010s 00:05:57.460 22:04:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.460 22:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:57.460 ************************************ 00:05:57.460 END TEST accel_negative_buffers 00:05:57.460 ************************************ 00:05:57.460 Error: writing output failed: Broken pipe 00:05:57.460 22:04:52 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:57.460 22:04:52 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:57.460 22:04:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.460 22:04:52 -- common/autotest_common.sh@10 -- # set +x 00:05:57.460 ************************************ 00:05:57.460 START TEST accel_crc32c 00:05:57.460 ************************************ 00:05:57.460 22:04:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:57.460 22:04:52 -- accel/accel.sh@16 -- # local accel_opc 00:05:57.460 22:04:52 -- accel/accel.sh@17 -- # local accel_module 00:05:57.460 22:04:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:57.460 22:04:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:57.460 22:04:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.460 22:04:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.460 22:04:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.460 22:04:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.460 22:04:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.460 22:04:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.460 22:04:52 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.460 22:04:52 -- accel/accel.sh@42 -- # jq -r . 00:05:57.460 [2024-07-24 22:04:52.519330] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:57.460 [2024-07-24 22:04:52.519406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384269 ] 00:05:57.460 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.460 [2024-07-24 22:04:52.574304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.749 [2024-07-24 22:04:52.614236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.693 22:04:53 -- accel/accel.sh@18 -- # out=' 00:05:58.693 SPDK Configuration: 00:05:58.693 Core mask: 0x1 00:05:58.693 00:05:58.693 Accel Perf Configuration: 00:05:58.693 Workload Type: crc32c 00:05:58.693 CRC-32C seed: 32 00:05:58.693 Transfer size: 4096 bytes 00:05:58.693 Vector count 1 00:05:58.693 Module: software 00:05:58.693 Queue depth: 32 00:05:58.693 Allocate depth: 32 00:05:58.693 # threads/core: 1 00:05:58.693 Run time: 1 seconds 00:05:58.693 Verify: Yes 00:05:58.693 00:05:58.693 Running for 1 seconds... 
00:05:58.693 00:05:58.693 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:58.693 ------------------------------------------------------------------------------------ 00:05:58.693 0,0 574976/s 2246 MiB/s 0 0 00:05:58.693 ==================================================================================== 00:05:58.693 Total 574976/s 2246 MiB/s 0 0' 00:05:58.693 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.693 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.693 22:04:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:58.693 22:04:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:58.693 22:04:53 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.693 22:04:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.693 22:04:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.693 22:04:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.693 22:04:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.693 22:04:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.693 22:04:53 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.693 22:04:53 -- accel/accel.sh@42 -- # jq -r . 00:05:58.693 [2024-07-24 22:04:53.807282] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:05:58.693 [2024-07-24 22:04:53.807356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384513 ] 00:05:58.953 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.953 [2024-07-24 22:04:53.862540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.953 [2024-07-24 22:04:53.899165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.953 22:04:53 -- accel/accel.sh@21 -- # val= 00:05:58.953 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.953 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.953 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.953 22:04:53 -- accel/accel.sh@21 -- # val= 00:05:58.953 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.953 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.953 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.953 22:04:53 -- accel/accel.sh@21 -- # val=0x1 00:05:58.953 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.953 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.953 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.953 22:04:53 -- accel/accel.sh@21 -- # val= 00:05:58.953 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.953 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.953 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.953 22:04:53 -- accel/accel.sh@21 -- # val= 00:05:58.953 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.953 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.953 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.953 22:04:53 -- accel/accel.sh@21 -- # val=crc32c 00:05:58.953 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.953 22:04:53 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:58.953 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.954 22:04:53 -- accel/accel.sh@21 -- # val=32 00:05:58.954 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.954 
22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.954 22:04:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:58.954 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.954 22:04:53 -- accel/accel.sh@21 -- # val= 00:05:58.954 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.954 22:04:53 -- accel/accel.sh@21 -- # val=software 00:05:58.954 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.954 22:04:53 -- accel/accel.sh@23 -- # accel_module=software 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.954 22:04:53 -- accel/accel.sh@21 -- # val=32 00:05:58.954 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.954 22:04:53 -- accel/accel.sh@21 -- # val=32 00:05:58.954 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.954 22:04:53 -- accel/accel.sh@21 -- # val=1 00:05:58.954 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.954 22:04:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:58.954 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.954 22:04:53 -- accel/accel.sh@21 -- # val=Yes 00:05:58.954 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.954 22:04:53 -- accel/accel.sh@21 -- # val= 00:05:58.954 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:05:58.954 22:04:53 -- accel/accel.sh@21 -- # val= 00:05:58.954 22:04:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # IFS=: 00:05:58.954 22:04:53 -- accel/accel.sh@20 -- # read -r var val 00:06:00.334 22:04:55 -- accel/accel.sh@21 -- # val= 00:06:00.334 22:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.334 22:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:00.334 22:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:00.334 22:04:55 -- accel/accel.sh@21 -- # val= 00:06:00.334 22:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.334 22:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:00.334 22:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:00.334 22:04:55 -- accel/accel.sh@21 -- # val= 00:06:00.334 22:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.334 22:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:00.334 22:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:00.334 22:04:55 -- accel/accel.sh@21 -- # val= 00:06:00.334 22:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.334 22:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:00.334 22:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:00.334 22:04:55 -- accel/accel.sh@21 -- # val= 00:06:00.334 22:04:55 -- accel/accel.sh@22 -- # case "$var" in 
00:06:00.334 22:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:00.334 22:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:00.334 22:04:55 -- accel/accel.sh@21 -- # val= 00:06:00.334 22:04:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.334 22:04:55 -- accel/accel.sh@20 -- # IFS=: 00:06:00.334 22:04:55 -- accel/accel.sh@20 -- # read -r var val 00:06:00.334 22:04:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.334 22:04:55 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:00.334 22:04:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.334 00:06:00.334 real 0m2.574s 00:06:00.334 user 0m2.353s 00:06:00.334 sys 0m0.219s 00:06:00.334 22:04:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.334 22:04:55 -- common/autotest_common.sh@10 -- # set +x 00:06:00.334 ************************************ 00:06:00.334 END TEST accel_crc32c 00:06:00.334 ************************************ 00:06:00.334 22:04:55 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:00.334 22:04:55 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:00.334 22:04:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.334 22:04:55 -- common/autotest_common.sh@10 -- # set +x 00:06:00.334 ************************************ 00:06:00.334 START TEST accel_crc32c_C2 00:06:00.334 ************************************ 00:06:00.334 22:04:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:00.334 22:04:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.334 22:04:55 -- accel/accel.sh@17 -- # local accel_module 00:06:00.334 22:04:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:00.334 22:04:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:00.335 22:04:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.335 22:04:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.335 22:04:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.335 22:04:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.335 22:04:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.335 22:04:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.335 22:04:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.335 22:04:55 -- accel/accel.sh@42 -- # jq -r . 00:06:00.335 [2024-07-24 22:04:55.122861] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
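The crc32c bandwidth column is consistent with the 4096-byte transfer size reported in the configuration block: transfers per second times transfer size gives the MiB/s figure. Checking the first run's numbers:

  xfers=574976 size=4096
  echo $((xfers * size / 1024 / 1024))   # 2246 MiB/s, matching the report above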
00:06:00.335 [2024-07-24 22:04:55.122936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384761 ] 00:06:00.335 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.335 [2024-07-24 22:04:55.177411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.335 [2024-07-24 22:04:55.214028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.272 22:04:56 -- accel/accel.sh@18 -- # out=' 00:06:01.272 SPDK Configuration: 00:06:01.272 Core mask: 0x1 00:06:01.272 00:06:01.272 Accel Perf Configuration: 00:06:01.272 Workload Type: crc32c 00:06:01.272 CRC-32C seed: 0 00:06:01.272 Transfer size: 4096 bytes 00:06:01.272 Vector count 2 00:06:01.272 Module: software 00:06:01.272 Queue depth: 32 00:06:01.272 Allocate depth: 32 00:06:01.272 # threads/core: 1 00:06:01.272 Run time: 1 seconds 00:06:01.272 Verify: Yes 00:06:01.272 00:06:01.272 Running for 1 seconds... 00:06:01.272 00:06:01.272 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:01.272 ------------------------------------------------------------------------------------ 00:06:01.272 0,0 452384/s 3534 MiB/s 0 0 00:06:01.272 ==================================================================================== 00:06:01.272 Total 452384/s 1767 MiB/s 0 0' 00:06:01.272 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.272 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.272 22:04:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:01.272 22:04:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:01.272 22:04:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.272 22:04:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.272 22:04:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.272 22:04:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.272 22:04:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.272 22:04:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.272 22:04:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.272 22:04:56 -- accel/accel.sh@42 -- # jq -r . 00:06:01.272 [2024-07-24 22:04:56.406390] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
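With -C 2 each transfer appears to carry two 4096-byte vectors, which is what the per-core 3534 MiB/s row above works out to, while the Total line keeps counting a single vector per transfer, hence 1767 MiB/s:

  xfers=452384 size=4096 vectors=2
  echo $((xfers * size * vectors / 1024 / 1024))   # 3534 MiB/s per-core figure
  echo $((xfers * size / 1024 / 1024))             # 1767 MiB/s total figure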
00:06:01.272 [2024-07-24 22:04:56.406467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3384953 ] 00:06:01.531 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.531 [2024-07-24 22:04:56.461911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.531 [2024-07-24 22:04:56.498772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val= 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val= 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val=0x1 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val= 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val= 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val=crc32c 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val=0 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val= 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val=software 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val=32 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val=32 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- 
accel/accel.sh@21 -- # val=1 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val=Yes 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val= 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:01.531 22:04:56 -- accel/accel.sh@21 -- # val= 00:06:01.531 22:04:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # IFS=: 00:06:01.531 22:04:56 -- accel/accel.sh@20 -- # read -r var val 00:06:02.912 22:04:57 -- accel/accel.sh@21 -- # val= 00:06:02.912 22:04:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.912 22:04:57 -- accel/accel.sh@20 -- # IFS=: 00:06:02.912 22:04:57 -- accel/accel.sh@20 -- # read -r var val 00:06:02.912 22:04:57 -- accel/accel.sh@21 -- # val= 00:06:02.912 22:04:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.912 22:04:57 -- accel/accel.sh@20 -- # IFS=: 00:06:02.912 22:04:57 -- accel/accel.sh@20 -- # read -r var val 00:06:02.912 22:04:57 -- accel/accel.sh@21 -- # val= 00:06:02.912 22:04:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.912 22:04:57 -- accel/accel.sh@20 -- # IFS=: 00:06:02.912 22:04:57 -- accel/accel.sh@20 -- # read -r var val 00:06:02.912 22:04:57 -- accel/accel.sh@21 -- # val= 00:06:02.912 22:04:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.912 22:04:57 -- accel/accel.sh@20 -- # IFS=: 00:06:02.912 22:04:57 -- accel/accel.sh@20 -- # read -r var val 00:06:02.912 22:04:57 -- accel/accel.sh@21 -- # val= 00:06:02.912 22:04:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.912 22:04:57 -- accel/accel.sh@20 -- # IFS=: 00:06:02.912 22:04:57 -- accel/accel.sh@20 -- # read -r var val 00:06:02.912 22:04:57 -- accel/accel.sh@21 -- # val= 00:06:02.912 22:04:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:02.912 22:04:57 -- accel/accel.sh@20 -- # IFS=: 00:06:02.912 22:04:57 -- accel/accel.sh@20 -- # read -r var val 00:06:02.912 22:04:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:02.912 22:04:57 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:02.912 22:04:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.912 00:06:02.912 real 0m2.569s 00:06:02.912 user 0m2.341s 00:06:02.912 sys 0m0.225s 00:06:02.912 22:04:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.912 22:04:57 -- common/autotest_common.sh@10 -- # set +x 00:06:02.912 ************************************ 00:06:02.912 END TEST accel_crc32c_C2 00:06:02.912 ************************************ 00:06:02.912 22:04:57 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:02.912 22:04:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:02.912 22:04:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.912 22:04:57 -- common/autotest_common.sh@10 -- # set +x 00:06:02.912 ************************************ 00:06:02.912 START TEST accel_copy 
00:06:02.912 ************************************ 00:06:02.912 22:04:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:02.912 22:04:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.912 22:04:57 -- accel/accel.sh@17 -- # local accel_module 00:06:02.912 22:04:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:02.912 22:04:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:02.912 22:04:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.912 22:04:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:02.912 22:04:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.912 22:04:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.912 22:04:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:02.912 22:04:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:02.912 22:04:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:02.912 22:04:57 -- accel/accel.sh@42 -- # jq -r . 00:06:02.912 [2024-07-24 22:04:57.721616] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:02.912 [2024-07-24 22:04:57.721691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385159 ] 00:06:02.912 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.912 [2024-07-24 22:04:57.776213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.912 [2024-07-24 22:04:57.813372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.852 22:04:58 -- accel/accel.sh@18 -- # out=' 00:06:03.852 SPDK Configuration: 00:06:03.852 Core mask: 0x1 00:06:03.852 00:06:03.852 Accel Perf Configuration: 00:06:03.852 Workload Type: copy 00:06:03.852 Transfer size: 4096 bytes 00:06:03.852 Vector count 1 00:06:03.852 Module: software 00:06:03.852 Queue depth: 32 00:06:03.852 Allocate depth: 32 00:06:03.852 # threads/core: 1 00:06:03.852 Run time: 1 seconds 00:06:03.852 Verify: Yes 00:06:03.852 00:06:03.852 Running for 1 seconds... 00:06:03.852 00:06:03.852 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:03.852 ------------------------------------------------------------------------------------ 00:06:03.852 0,0 425152/s 1660 MiB/s 0 0 00:06:03.852 ==================================================================================== 00:06:03.852 Total 425152/s 1660 MiB/s 0 0' 00:06:03.852 22:04:58 -- accel/accel.sh@20 -- # IFS=: 00:06:03.852 22:04:58 -- accel/accel.sh@20 -- # read -r var val 00:06:03.852 22:04:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:03.852 22:04:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:03.852 22:04:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.852 22:04:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.852 22:04:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.852 22:04:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.852 22:04:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.112 22:04:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.112 22:04:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.112 22:04:58 -- accel/accel.sh@42 -- # jq -r . 00:06:04.112 [2024-07-24 22:04:59.006255] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
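Each accel test above finishes with the same three [[ ... ]] checks: the module that actually executed the opcode must be set and must match the expected assignment, which in this software-only run is always "software". A rough sketch of that final assertion, using the variable names from the trace:

  [[ -n "$accel_module" ]] \
    && [[ -n "$accel_opc" ]] \
    && [[ "$accel_module" == "${expected_opcs[$accel_opc]}" ]] \
    && echo "$accel_opc ran on the expected module: $accel_module"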
00:06:04.112 [2024-07-24 22:04:59.006332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385345 ] 00:06:04.112 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.112 [2024-07-24 22:04:59.061098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.112 [2024-07-24 22:04:59.098156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.112 22:04:59 -- accel/accel.sh@21 -- # val= 00:06:04.112 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.112 22:04:59 -- accel/accel.sh@21 -- # val= 00:06:04.112 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.112 22:04:59 -- accel/accel.sh@21 -- # val=0x1 00:06:04.112 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.112 22:04:59 -- accel/accel.sh@21 -- # val= 00:06:04.112 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.112 22:04:59 -- accel/accel.sh@21 -- # val= 00:06:04.112 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.112 22:04:59 -- accel/accel.sh@21 -- # val=copy 00:06:04.112 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.112 22:04:59 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.112 22:04:59 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.112 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.112 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.112 22:04:59 -- accel/accel.sh@21 -- # val= 00:06:04.113 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.113 22:04:59 -- accel/accel.sh@21 -- # val=software 00:06:04.113 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.113 22:04:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.113 22:04:59 -- accel/accel.sh@21 -- # val=32 00:06:04.113 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.113 22:04:59 -- accel/accel.sh@21 -- # val=32 00:06:04.113 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.113 22:04:59 -- accel/accel.sh@21 -- # val=1 00:06:04.113 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.113 22:04:59 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:04.113 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.113 22:04:59 -- accel/accel.sh@21 -- # val=Yes 00:06:04.113 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.113 22:04:59 -- accel/accel.sh@21 -- # val= 00:06:04.113 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:04.113 22:04:59 -- accel/accel.sh@21 -- # val= 00:06:04.113 22:04:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # IFS=: 00:06:04.113 22:04:59 -- accel/accel.sh@20 -- # read -r var val 00:06:05.494 22:05:00 -- accel/accel.sh@21 -- # val= 00:06:05.494 22:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.494 22:05:00 -- accel/accel.sh@20 -- # IFS=: 00:06:05.494 22:05:00 -- accel/accel.sh@20 -- # read -r var val 00:06:05.494 22:05:00 -- accel/accel.sh@21 -- # val= 00:06:05.494 22:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.494 22:05:00 -- accel/accel.sh@20 -- # IFS=: 00:06:05.494 22:05:00 -- accel/accel.sh@20 -- # read -r var val 00:06:05.494 22:05:00 -- accel/accel.sh@21 -- # val= 00:06:05.494 22:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.494 22:05:00 -- accel/accel.sh@20 -- # IFS=: 00:06:05.494 22:05:00 -- accel/accel.sh@20 -- # read -r var val 00:06:05.494 22:05:00 -- accel/accel.sh@21 -- # val= 00:06:05.494 22:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.494 22:05:00 -- accel/accel.sh@20 -- # IFS=: 00:06:05.494 22:05:00 -- accel/accel.sh@20 -- # read -r var val 00:06:05.494 22:05:00 -- accel/accel.sh@21 -- # val= 00:06:05.494 22:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.494 22:05:00 -- accel/accel.sh@20 -- # IFS=: 00:06:05.494 22:05:00 -- accel/accel.sh@20 -- # read -r var val 00:06:05.494 22:05:00 -- accel/accel.sh@21 -- # val= 00:06:05.494 22:05:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:05.494 22:05:00 -- accel/accel.sh@20 -- # IFS=: 00:06:05.494 22:05:00 -- accel/accel.sh@20 -- # read -r var val 00:06:05.494 22:05:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:05.494 22:05:00 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:05.494 22:05:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.494 00:06:05.494 real 0m2.571s 00:06:05.494 user 0m2.341s 00:06:05.494 sys 0m0.227s 00:06:05.494 22:05:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.494 22:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:05.494 ************************************ 00:06:05.494 END TEST accel_copy 00:06:05.494 ************************************ 00:06:05.494 22:05:00 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.494 22:05:00 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:05.494 22:05:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.494 22:05:00 -- common/autotest_common.sh@10 -- # set +x 00:06:05.494 ************************************ 00:06:05.494 START TEST accel_fill 00:06:05.494 ************************************ 00:06:05.494 22:05:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.494 22:05:00 -- accel/accel.sh@16 -- # local accel_opc 
00:06:05.494 22:05:00 -- accel/accel.sh@17 -- # local accel_module 00:06:05.494 22:05:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.494 22:05:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:05.494 22:05:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.494 22:05:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:05.494 22:05:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.494 22:05:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.494 22:05:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:05.494 22:05:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:05.494 22:05:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:05.494 22:05:00 -- accel/accel.sh@42 -- # jq -r . 00:06:05.494 [2024-07-24 22:05:00.317016] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:05.494 [2024-07-24 22:05:00.317078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385555 ] 00:06:05.494 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.494 [2024-07-24 22:05:00.365866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.494 [2024-07-24 22:05:00.403270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.877 22:05:01 -- accel/accel.sh@18 -- # out=' 00:06:06.877 SPDK Configuration: 00:06:06.877 Core mask: 0x1 00:06:06.877 00:06:06.877 Accel Perf Configuration: 00:06:06.877 Workload Type: fill 00:06:06.877 Fill pattern: 0x80 00:06:06.877 Transfer size: 4096 bytes 00:06:06.877 Vector count 1 00:06:06.877 Module: software 00:06:06.877 Queue depth: 64 00:06:06.877 Allocate depth: 64 00:06:06.877 # threads/core: 1 00:06:06.877 Run time: 1 seconds 00:06:06.877 Verify: Yes 00:06:06.877 00:06:06.877 Running for 1 seconds... 00:06:06.877 00:06:06.877 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:06.877 ------------------------------------------------------------------------------------ 00:06:06.877 0,0 657984/s 2570 MiB/s 0 0 00:06:06.877 ==================================================================================== 00:06:06.877 Total 657984/s 2570 MiB/s 0 0' 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.877 22:05:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:06.877 22:05:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.877 22:05:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.877 22:05:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.877 22:05:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.877 22:05:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.877 22:05:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.877 22:05:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.877 22:05:01 -- accel/accel.sh@42 -- # jq -r . 00:06:06.877 [2024-07-24 22:05:01.595336] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:06.877 [2024-07-24 22:05:01.595420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385749 ] 00:06:06.877 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.877 [2024-07-24 22:05:01.652966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.877 [2024-07-24 22:05:01.690664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val= 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val= 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val=0x1 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val= 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val= 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val=fill 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val=0x80 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val= 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val=software 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val=64 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val=64 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- 
accel/accel.sh@21 -- # val=1 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.877 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.877 22:05:01 -- accel/accel.sh@21 -- # val=Yes 00:06:06.877 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.878 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.878 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.878 22:05:01 -- accel/accel.sh@21 -- # val= 00:06:06.878 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.878 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.878 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:06.878 22:05:01 -- accel/accel.sh@21 -- # val= 00:06:06.878 22:05:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.878 22:05:01 -- accel/accel.sh@20 -- # IFS=: 00:06:06.878 22:05:01 -- accel/accel.sh@20 -- # read -r var val 00:06:07.817 22:05:02 -- accel/accel.sh@21 -- # val= 00:06:07.817 22:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.817 22:05:02 -- accel/accel.sh@20 -- # IFS=: 00:06:07.817 22:05:02 -- accel/accel.sh@20 -- # read -r var val 00:06:07.817 22:05:02 -- accel/accel.sh@21 -- # val= 00:06:07.817 22:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.817 22:05:02 -- accel/accel.sh@20 -- # IFS=: 00:06:07.817 22:05:02 -- accel/accel.sh@20 -- # read -r var val 00:06:07.817 22:05:02 -- accel/accel.sh@21 -- # val= 00:06:07.817 22:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.817 22:05:02 -- accel/accel.sh@20 -- # IFS=: 00:06:07.817 22:05:02 -- accel/accel.sh@20 -- # read -r var val 00:06:07.817 22:05:02 -- accel/accel.sh@21 -- # val= 00:06:07.817 22:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.817 22:05:02 -- accel/accel.sh@20 -- # IFS=: 00:06:07.817 22:05:02 -- accel/accel.sh@20 -- # read -r var val 00:06:07.817 22:05:02 -- accel/accel.sh@21 -- # val= 00:06:07.817 22:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.817 22:05:02 -- accel/accel.sh@20 -- # IFS=: 00:06:07.817 22:05:02 -- accel/accel.sh@20 -- # read -r var val 00:06:07.817 22:05:02 -- accel/accel.sh@21 -- # val= 00:06:07.817 22:05:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.817 22:05:02 -- accel/accel.sh@20 -- # IFS=: 00:06:07.817 22:05:02 -- accel/accel.sh@20 -- # read -r var val 00:06:07.817 22:05:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:07.818 22:05:02 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:07.818 22:05:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.818 00:06:07.818 real 0m2.557s 00:06:07.818 user 0m2.347s 00:06:07.818 sys 0m0.208s 00:06:07.818 22:05:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.818 22:05:02 -- common/autotest_common.sh@10 -- # set +x 00:06:07.818 ************************************ 00:06:07.818 END TEST accel_fill 00:06:07.818 ************************************ 00:06:07.818 22:05:02 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:07.818 22:05:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:07.818 22:05:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.818 22:05:02 -- common/autotest_common.sh@10 -- # set +x 00:06:07.818 ************************************ 00:06:07.818 START TEST 
accel_copy_crc32c 00:06:07.818 ************************************ 00:06:07.818 22:05:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:07.818 22:05:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:07.818 22:05:02 -- accel/accel.sh@17 -- # local accel_module 00:06:07.818 22:05:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:07.818 22:05:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:07.818 22:05:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.818 22:05:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.818 22:05:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.818 22:05:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.818 22:05:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.818 22:05:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.818 22:05:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.818 22:05:02 -- accel/accel.sh@42 -- # jq -r . 00:06:07.818 [2024-07-24 22:05:02.913811] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:07.818 [2024-07-24 22:05:02.913869] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386004 ] 00:06:07.818 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.078 [2024-07-24 22:05:02.967764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.078 [2024-07-24 22:05:03.005354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.459 22:05:04 -- accel/accel.sh@18 -- # out=' 00:06:09.459 SPDK Configuration: 00:06:09.459 Core mask: 0x1 00:06:09.459 00:06:09.459 Accel Perf Configuration: 00:06:09.459 Workload Type: copy_crc32c 00:06:09.459 CRC-32C seed: 0 00:06:09.459 Vector size: 4096 bytes 00:06:09.459 Transfer size: 4096 bytes 00:06:09.459 Vector count 1 00:06:09.459 Module: software 00:06:09.459 Queue depth: 32 00:06:09.459 Allocate depth: 32 00:06:09.459 # threads/core: 1 00:06:09.459 Run time: 1 seconds 00:06:09.459 Verify: Yes 00:06:09.459 00:06:09.459 Running for 1 seconds... 00:06:09.459 00:06:09.459 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:09.459 ------------------------------------------------------------------------------------ 00:06:09.459 0,0 326144/s 1274 MiB/s 0 0 00:06:09.459 ==================================================================================== 00:06:09.459 Total 326144/s 1274 MiB/s 0 0' 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:09.459 22:05:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:09.459 22:05:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.459 22:05:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.459 22:05:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.459 22:05:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.459 22:05:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.459 22:05:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.459 22:05:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:09.459 22:05:04 -- accel/accel.sh@42 -- # jq -r . 
00:06:09.459 [2024-07-24 22:05:04.187391] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:09.459 [2024-07-24 22:05:04.187436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386236 ] 00:06:09.459 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.459 [2024-07-24 22:05:04.239539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.459 [2024-07-24 22:05:04.275853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val= 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val= 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val=0x1 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val= 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val= 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val=0 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val= 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val=software 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val=32 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 
00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val=32 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val=1 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val=Yes 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val= 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:09.459 22:05:04 -- accel/accel.sh@21 -- # val= 00:06:09.459 22:05:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # IFS=: 00:06:09.459 22:05:04 -- accel/accel.sh@20 -- # read -r var val 00:06:10.398 22:05:05 -- accel/accel.sh@21 -- # val= 00:06:10.398 22:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.398 22:05:05 -- accel/accel.sh@20 -- # IFS=: 00:06:10.398 22:05:05 -- accel/accel.sh@20 -- # read -r var val 00:06:10.398 22:05:05 -- accel/accel.sh@21 -- # val= 00:06:10.398 22:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.398 22:05:05 -- accel/accel.sh@20 -- # IFS=: 00:06:10.398 22:05:05 -- accel/accel.sh@20 -- # read -r var val 00:06:10.398 22:05:05 -- accel/accel.sh@21 -- # val= 00:06:10.398 22:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.398 22:05:05 -- accel/accel.sh@20 -- # IFS=: 00:06:10.398 22:05:05 -- accel/accel.sh@20 -- # read -r var val 00:06:10.398 22:05:05 -- accel/accel.sh@21 -- # val= 00:06:10.398 22:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.398 22:05:05 -- accel/accel.sh@20 -- # IFS=: 00:06:10.398 22:05:05 -- accel/accel.sh@20 -- # read -r var val 00:06:10.398 22:05:05 -- accel/accel.sh@21 -- # val= 00:06:10.398 22:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.398 22:05:05 -- accel/accel.sh@20 -- # IFS=: 00:06:10.398 22:05:05 -- accel/accel.sh@20 -- # read -r var val 00:06:10.398 22:05:05 -- accel/accel.sh@21 -- # val= 00:06:10.398 22:05:05 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.398 22:05:05 -- accel/accel.sh@20 -- # IFS=: 00:06:10.398 22:05:05 -- accel/accel.sh@20 -- # read -r var val 00:06:10.398 22:05:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:10.398 22:05:05 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:10.398 22:05:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.398 00:06:10.398 real 0m2.553s 00:06:10.398 user 0m2.329s 00:06:10.398 sys 0m0.222s 00:06:10.398 22:05:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.398 22:05:05 -- common/autotest_common.sh@10 -- # set +x 00:06:10.398 ************************************ 00:06:10.398 END TEST accel_copy_crc32c 00:06:10.398 ************************************ 00:06:10.398 
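The copy_crc32c run above is driven entirely by the command visible in the trace (accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y, where /dev/fd/62 carries the JSON accel config that build_accel_config assembles). A minimal sketch of reproducing it outside the harness, assuming the same workspace checkout and the default software accel module so the -c config can be dropped:
  # flags taken from the traced invocation above; -t run time (s), -w workload, -y verify
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y
The MiB/s column in the result tables follows from the transfer rate and the 4096-byte transfer size reported in the configuration block, e.g. 326144 transfers/s x 4096 B / 2^20 ~= 1274 MiB/s, matching the copy_crc32c table above.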
22:05:05 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:10.398 22:05:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:10.398 22:05:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.398 22:05:05 -- common/autotest_common.sh@10 -- # set +x 00:06:10.398 ************************************ 00:06:10.398 START TEST accel_copy_crc32c_C2 00:06:10.398 ************************************ 00:06:10.398 22:05:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:10.398 22:05:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:10.398 22:05:05 -- accel/accel.sh@17 -- # local accel_module 00:06:10.398 22:05:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:10.398 22:05:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:10.398 22:05:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.398 22:05:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.398 22:05:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.398 22:05:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.398 22:05:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.398 22:05:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.398 22:05:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.398 22:05:05 -- accel/accel.sh@42 -- # jq -r . 00:06:10.398 [2024-07-24 22:05:05.506571] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:10.398 [2024-07-24 22:05:05.506652] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386485 ] 00:06:10.398 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.657 [2024-07-24 22:05:05.564689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.657 [2024-07-24 22:05:05.600882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.035 22:05:06 -- accel/accel.sh@18 -- # out=' 00:06:12.035 SPDK Configuration: 00:06:12.035 Core mask: 0x1 00:06:12.035 00:06:12.035 Accel Perf Configuration: 00:06:12.035 Workload Type: copy_crc32c 00:06:12.035 CRC-32C seed: 0 00:06:12.035 Vector size: 4096 bytes 00:06:12.035 Transfer size: 8192 bytes 00:06:12.035 Vector count 2 00:06:12.035 Module: software 00:06:12.035 Queue depth: 32 00:06:12.035 Allocate depth: 32 00:06:12.035 # threads/core: 1 00:06:12.035 Run time: 1 seconds 00:06:12.035 Verify: Yes 00:06:12.035 00:06:12.035 Running for 1 seconds... 
00:06:12.035 00:06:12.035 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:12.035 ------------------------------------------------------------------------------------ 00:06:12.035 0,0 228224/s 1783 MiB/s 0 0 00:06:12.035 ==================================================================================== 00:06:12.035 Total 228224/s 891 MiB/s 0 0' 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:12.035 22:05:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:12.035 22:05:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.035 22:05:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.035 22:05:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.035 22:05:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.035 22:05:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.035 22:05:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.035 22:05:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.035 22:05:06 -- accel/accel.sh@42 -- # jq -r . 00:06:12.035 [2024-07-24 22:05:06.794504] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:12.035 [2024-07-24 22:05:06.794580] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386719 ] 00:06:12.035 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.035 [2024-07-24 22:05:06.849553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.035 [2024-07-24 22:05:06.886611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val= 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val= 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val=0x1 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val= 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val= 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val=0 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 
00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val= 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val=software 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val=32 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val=32 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val=1 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.035 22:05:06 -- accel/accel.sh@21 -- # val=Yes 00:06:12.035 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.035 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.036 22:05:06 -- accel/accel.sh@21 -- # val= 00:06:12.036 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.036 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.036 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.036 22:05:06 -- accel/accel.sh@21 -- # val= 00:06:12.036 22:05:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.036 22:05:06 -- accel/accel.sh@20 -- # IFS=: 00:06:12.036 22:05:06 -- accel/accel.sh@20 -- # read -r var val 00:06:12.975 22:05:08 -- accel/accel.sh@21 -- # val= 00:06:12.975 22:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.975 22:05:08 -- accel/accel.sh@20 -- # IFS=: 00:06:12.975 22:05:08 -- accel/accel.sh@20 -- # read -r var val 00:06:12.975 22:05:08 -- accel/accel.sh@21 -- # val= 00:06:12.975 22:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.975 22:05:08 -- accel/accel.sh@20 -- # IFS=: 00:06:12.975 22:05:08 -- accel/accel.sh@20 -- # read -r var val 00:06:12.975 22:05:08 -- accel/accel.sh@21 -- # val= 00:06:12.975 22:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.975 22:05:08 -- accel/accel.sh@20 -- # IFS=: 00:06:12.975 22:05:08 -- accel/accel.sh@20 -- # read -r var val 00:06:12.976 22:05:08 -- accel/accel.sh@21 -- # val= 00:06:12.976 22:05:08 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:12.976 22:05:08 -- accel/accel.sh@20 -- # IFS=: 00:06:12.976 22:05:08 -- accel/accel.sh@20 -- # read -r var val 00:06:12.976 22:05:08 -- accel/accel.sh@21 -- # val= 00:06:12.976 22:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.976 22:05:08 -- accel/accel.sh@20 -- # IFS=: 00:06:12.976 22:05:08 -- accel/accel.sh@20 -- # read -r var val 00:06:12.976 22:05:08 -- accel/accel.sh@21 -- # val= 00:06:12.976 22:05:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.976 22:05:08 -- accel/accel.sh@20 -- # IFS=: 00:06:12.976 22:05:08 -- accel/accel.sh@20 -- # read -r var val 00:06:12.976 22:05:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.976 22:05:08 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:12.976 22:05:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.976 00:06:12.976 real 0m2.576s 00:06:12.976 user 0m2.349s 00:06:12.976 sys 0m0.225s 00:06:12.976 22:05:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.976 22:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:12.976 ************************************ 00:06:12.976 END TEST accel_copy_crc32c_C2 00:06:12.976 ************************************ 00:06:12.976 22:05:08 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:12.976 22:05:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:12.976 22:05:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.976 22:05:08 -- common/autotest_common.sh@10 -- # set +x 00:06:12.976 ************************************ 00:06:12.976 START TEST accel_dualcast 00:06:12.976 ************************************ 00:06:12.976 22:05:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:12.976 22:05:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.976 22:05:08 -- accel/accel.sh@17 -- # local accel_module 00:06:12.976 22:05:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:12.976 22:05:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:12.976 22:05:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.976 22:05:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.976 22:05:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.976 22:05:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.976 22:05:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.976 22:05:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.976 22:05:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.976 22:05:08 -- accel/accel.sh@42 -- # jq -r . 00:06:13.236 [2024-07-24 22:05:08.110491] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:13.236 [2024-07-24 22:05:08.110583] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386975 ] 00:06:13.236 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.236 [2024-07-24 22:05:08.164728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.236 [2024-07-24 22:05:08.201967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.620 22:05:09 -- accel/accel.sh@18 -- # out=' 00:06:14.620 SPDK Configuration: 00:06:14.620 Core mask: 0x1 00:06:14.620 00:06:14.620 Accel Perf Configuration: 00:06:14.620 Workload Type: dualcast 00:06:14.620 Transfer size: 4096 bytes 00:06:14.620 Vector count 1 00:06:14.620 Module: software 00:06:14.620 Queue depth: 32 00:06:14.620 Allocate depth: 32 00:06:14.620 # threads/core: 1 00:06:14.620 Run time: 1 seconds 00:06:14.620 Verify: Yes 00:06:14.620 00:06:14.620 Running for 1 seconds... 00:06:14.620 00:06:14.620 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:14.620 ------------------------------------------------------------------------------------ 00:06:14.620 0,0 497344/s 1942 MiB/s 0 0 00:06:14.620 ==================================================================================== 00:06:14.620 Total 497344/s 1942 MiB/s 0 0' 00:06:14.620 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.620 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.620 22:05:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:14.620 22:05:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:14.620 22:05:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.620 22:05:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.620 22:05:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.620 22:05:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.620 22:05:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.620 22:05:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.620 22:05:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.620 22:05:09 -- accel/accel.sh@42 -- # jq -r . 00:06:14.620 [2024-07-24 22:05:09.383705] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:14.620 [2024-07-24 22:05:09.383751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387207 ] 00:06:14.620 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.620 [2024-07-24 22:05:09.436036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.620 [2024-07-24 22:05:09.472330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.620 22:05:09 -- accel/accel.sh@21 -- # val= 00:06:14.620 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.620 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.620 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.620 22:05:09 -- accel/accel.sh@21 -- # val= 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 -- accel/accel.sh@21 -- # val=0x1 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 -- accel/accel.sh@21 -- # val= 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 -- accel/accel.sh@21 -- # val= 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 -- accel/accel.sh@21 -- # val=dualcast 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 -- accel/accel.sh@21 -- # val= 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 -- accel/accel.sh@21 -- # val=software 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 -- accel/accel.sh@21 -- # val=32 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 -- accel/accel.sh@21 -- # val=32 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 -- accel/accel.sh@21 -- # val=1 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 
-- accel/accel.sh@21 -- # val='1 seconds' 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 -- accel/accel.sh@21 -- # val=Yes 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 -- accel/accel.sh@21 -- # val= 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:14.621 22:05:09 -- accel/accel.sh@21 -- # val= 00:06:14.621 22:05:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # IFS=: 00:06:14.621 22:05:09 -- accel/accel.sh@20 -- # read -r var val 00:06:15.664 22:05:10 -- accel/accel.sh@21 -- # val= 00:06:15.664 22:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.664 22:05:10 -- accel/accel.sh@20 -- # IFS=: 00:06:15.664 22:05:10 -- accel/accel.sh@20 -- # read -r var val 00:06:15.664 22:05:10 -- accel/accel.sh@21 -- # val= 00:06:15.664 22:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.664 22:05:10 -- accel/accel.sh@20 -- # IFS=: 00:06:15.664 22:05:10 -- accel/accel.sh@20 -- # read -r var val 00:06:15.664 22:05:10 -- accel/accel.sh@21 -- # val= 00:06:15.664 22:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.664 22:05:10 -- accel/accel.sh@20 -- # IFS=: 00:06:15.664 22:05:10 -- accel/accel.sh@20 -- # read -r var val 00:06:15.664 22:05:10 -- accel/accel.sh@21 -- # val= 00:06:15.664 22:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.664 22:05:10 -- accel/accel.sh@20 -- # IFS=: 00:06:15.664 22:05:10 -- accel/accel.sh@20 -- # read -r var val 00:06:15.664 22:05:10 -- accel/accel.sh@21 -- # val= 00:06:15.664 22:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.664 22:05:10 -- accel/accel.sh@20 -- # IFS=: 00:06:15.664 22:05:10 -- accel/accel.sh@20 -- # read -r var val 00:06:15.664 22:05:10 -- accel/accel.sh@21 -- # val= 00:06:15.664 22:05:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.664 22:05:10 -- accel/accel.sh@20 -- # IFS=: 00:06:15.664 22:05:10 -- accel/accel.sh@20 -- # read -r var val 00:06:15.664 22:05:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.664 22:05:10 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:15.664 22:05:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.664 00:06:15.664 real 0m2.554s 00:06:15.664 user 0m2.342s 00:06:15.664 sys 0m0.208s 00:06:15.664 22:05:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.664 22:05:10 -- common/autotest_common.sh@10 -- # set +x 00:06:15.664 ************************************ 00:06:15.664 END TEST accel_dualcast 00:06:15.664 ************************************ 00:06:15.664 22:05:10 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:15.664 22:05:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:15.664 22:05:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.664 22:05:10 -- common/autotest_common.sh@10 -- # set +x 00:06:15.664 ************************************ 00:06:15.664 START TEST accel_compare 00:06:15.664 ************************************ 00:06:15.664 22:05:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:15.664 22:05:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.664 22:05:10 
-- accel/accel.sh@17 -- # local accel_module 00:06:15.664 22:05:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:15.664 22:05:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:15.664 22:05:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.664 22:05:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.664 22:05:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.664 22:05:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.664 22:05:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.664 22:05:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.664 22:05:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.664 22:05:10 -- accel/accel.sh@42 -- # jq -r . 00:06:15.664 [2024-07-24 22:05:10.689124] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:15.664 [2024-07-24 22:05:10.689182] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387460 ] 00:06:15.664 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.664 [2024-07-24 22:05:10.742850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.664 [2024-07-24 22:05:10.780255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.049 22:05:11 -- accel/accel.sh@18 -- # out=' 00:06:17.049 SPDK Configuration: 00:06:17.049 Core mask: 0x1 00:06:17.049 00:06:17.049 Accel Perf Configuration: 00:06:17.049 Workload Type: compare 00:06:17.049 Transfer size: 4096 bytes 00:06:17.049 Vector count 1 00:06:17.049 Module: software 00:06:17.049 Queue depth: 32 00:06:17.049 Allocate depth: 32 00:06:17.049 # threads/core: 1 00:06:17.049 Run time: 1 seconds 00:06:17.049 Verify: Yes 00:06:17.049 00:06:17.049 Running for 1 seconds... 00:06:17.049 00:06:17.049 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:17.049 ------------------------------------------------------------------------------------ 00:06:17.049 0,0 593056/s 2316 MiB/s 0 0 00:06:17.049 ==================================================================================== 00:06:17.049 Total 593056/s 2316 MiB/s 0 0' 00:06:17.049 22:05:11 -- accel/accel.sh@20 -- # IFS=: 00:06:17.049 22:05:11 -- accel/accel.sh@20 -- # read -r var val 00:06:17.049 22:05:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:17.049 22:05:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:17.049 22:05:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.049 22:05:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.049 22:05:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.049 22:05:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.049 22:05:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.049 22:05:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.049 22:05:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.049 22:05:11 -- accel/accel.sh@42 -- # jq -r . 00:06:17.049 [2024-07-24 22:05:11.962231] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:17.049 [2024-07-24 22:05:11.962279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387695 ] 00:06:17.049 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.049 [2024-07-24 22:05:12.014554] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.050 [2024-07-24 22:05:12.050443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val= 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val= 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val=0x1 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val= 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val= 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val=compare 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val= 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val=software 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val=32 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val=32 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val=1 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val=Yes 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val= 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:17.050 22:05:12 -- accel/accel.sh@21 -- # val= 00:06:17.050 22:05:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # IFS=: 00:06:17.050 22:05:12 -- accel/accel.sh@20 -- # read -r var val 00:06:18.430 22:05:13 -- accel/accel.sh@21 -- # val= 00:06:18.430 22:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.430 22:05:13 -- accel/accel.sh@20 -- # IFS=: 00:06:18.430 22:05:13 -- accel/accel.sh@20 -- # read -r var val 00:06:18.430 22:05:13 -- accel/accel.sh@21 -- # val= 00:06:18.430 22:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.430 22:05:13 -- accel/accel.sh@20 -- # IFS=: 00:06:18.430 22:05:13 -- accel/accel.sh@20 -- # read -r var val 00:06:18.430 22:05:13 -- accel/accel.sh@21 -- # val= 00:06:18.430 22:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.430 22:05:13 -- accel/accel.sh@20 -- # IFS=: 00:06:18.430 22:05:13 -- accel/accel.sh@20 -- # read -r var val 00:06:18.430 22:05:13 -- accel/accel.sh@21 -- # val= 00:06:18.430 22:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.430 22:05:13 -- accel/accel.sh@20 -- # IFS=: 00:06:18.430 22:05:13 -- accel/accel.sh@20 -- # read -r var val 00:06:18.430 22:05:13 -- accel/accel.sh@21 -- # val= 00:06:18.430 22:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.430 22:05:13 -- accel/accel.sh@20 -- # IFS=: 00:06:18.430 22:05:13 -- accel/accel.sh@20 -- # read -r var val 00:06:18.430 22:05:13 -- accel/accel.sh@21 -- # val= 00:06:18.430 22:05:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.430 22:05:13 -- accel/accel.sh@20 -- # IFS=: 00:06:18.430 22:05:13 -- accel/accel.sh@20 -- # read -r var val 00:06:18.430 22:05:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:18.430 22:05:13 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:18.430 22:05:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.430 00:06:18.430 real 0m2.548s 00:06:18.430 user 0m2.340s 00:06:18.430 sys 0m0.205s 00:06:18.430 22:05:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.430 22:05:13 -- common/autotest_common.sh@10 -- # set +x 00:06:18.430 ************************************ 00:06:18.430 END TEST accel_compare 00:06:18.430 ************************************ 00:06:18.430 22:05:13 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:18.430 22:05:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:18.430 22:05:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.430 22:05:13 -- common/autotest_common.sh@10 -- # set +x 00:06:18.430 ************************************ 00:06:18.430 START TEST accel_xor 00:06:18.430 ************************************ 00:06:18.430 22:05:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:18.430 22:05:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.430 22:05:13 -- accel/accel.sh@17 
-- # local accel_module 00:06:18.430 22:05:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:18.430 22:05:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:18.430 22:05:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.430 22:05:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.430 22:05:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.430 22:05:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.430 22:05:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.430 22:05:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.430 22:05:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.430 22:05:13 -- accel/accel.sh@42 -- # jq -r . 00:06:18.430 [2024-07-24 22:05:13.274231] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:18.430 [2024-07-24 22:05:13.274305] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3387950 ] 00:06:18.430 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.430 [2024-07-24 22:05:13.328972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.430 [2024-07-24 22:05:13.366010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.810 22:05:14 -- accel/accel.sh@18 -- # out=' 00:06:19.810 SPDK Configuration: 00:06:19.810 Core mask: 0x1 00:06:19.810 00:06:19.810 Accel Perf Configuration: 00:06:19.810 Workload Type: xor 00:06:19.810 Source buffers: 2 00:06:19.810 Transfer size: 4096 bytes 00:06:19.810 Vector count 1 00:06:19.810 Module: software 00:06:19.810 Queue depth: 32 00:06:19.810 Allocate depth: 32 00:06:19.810 # threads/core: 1 00:06:19.810 Run time: 1 seconds 00:06:19.810 Verify: Yes 00:06:19.810 00:06:19.811 Running for 1 seconds... 00:06:19.811 00:06:19.811 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:19.811 ------------------------------------------------------------------------------------ 00:06:19.811 0,0 480736/s 1877 MiB/s 0 0 00:06:19.811 ==================================================================================== 00:06:19.811 Total 480736/s 1877 MiB/s 0 0' 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:19.811 22:05:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:19.811 22:05:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.811 22:05:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.811 22:05:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.811 22:05:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.811 22:05:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.811 22:05:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.811 22:05:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.811 22:05:14 -- accel/accel.sh@42 -- # jq -r . 00:06:19.811 [2024-07-24 22:05:14.547378] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
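Note: the dense runs of "val=", "IFS=:", "read -r var val" and "case \"$var\" in" trace lines that dominate this stretch of the log are accel.sh parsing the report that accel_perf prints. Each "Key: value" line of the report is split on ':' so the script can recover the opcode and module it asserts at the end of every test (the "[[ -n software ]]", "[[ -n compare ]]" and "[[ software == \s\o\f\t\w\a\r\e ]]" checks above). A minimal sketch of that loop follows; the function name parse_report and the variable out are illustrative only, and the shipped accel.sh may differ in detail:

    # parse_report is a hypothetical name; "$out" is assumed to hold the captured accel_perf report.
    parse_report() {
        local var val accel_opc= accel_module=
        while IFS=: read -r var val; do
            case "$var" in
                *"Workload Type"*) accel_opc=${val// /} ;;    # e.g. "compare", "xor"
                *"Module"*)        accel_module=${val// /} ;; # e.g. "software"
            esac
        done <<< "$out"
        [[ -n $accel_module && -n $accel_opc ]]   # same shape as the assertions traced above
    }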
00:06:19.811 [2024-07-24 22:05:14.547438] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388185 ] 00:06:19.811 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.811 [2024-07-24 22:05:14.599652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.811 [2024-07-24 22:05:14.636083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val= 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val= 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val=0x1 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val= 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val= 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val=xor 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val=2 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val= 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val=software 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val=32 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val=32 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- 
accel/accel.sh@21 -- # val=1 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val=Yes 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val= 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:19.811 22:05:14 -- accel/accel.sh@21 -- # val= 00:06:19.811 22:05:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # IFS=: 00:06:19.811 22:05:14 -- accel/accel.sh@20 -- # read -r var val 00:06:20.750 22:05:15 -- accel/accel.sh@21 -- # val= 00:06:20.750 22:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.750 22:05:15 -- accel/accel.sh@20 -- # IFS=: 00:06:20.750 22:05:15 -- accel/accel.sh@20 -- # read -r var val 00:06:20.750 22:05:15 -- accel/accel.sh@21 -- # val= 00:06:20.750 22:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.750 22:05:15 -- accel/accel.sh@20 -- # IFS=: 00:06:20.750 22:05:15 -- accel/accel.sh@20 -- # read -r var val 00:06:20.750 22:05:15 -- accel/accel.sh@21 -- # val= 00:06:20.750 22:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.750 22:05:15 -- accel/accel.sh@20 -- # IFS=: 00:06:20.750 22:05:15 -- accel/accel.sh@20 -- # read -r var val 00:06:20.750 22:05:15 -- accel/accel.sh@21 -- # val= 00:06:20.750 22:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.750 22:05:15 -- accel/accel.sh@20 -- # IFS=: 00:06:20.750 22:05:15 -- accel/accel.sh@20 -- # read -r var val 00:06:20.750 22:05:15 -- accel/accel.sh@21 -- # val= 00:06:20.750 22:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.750 22:05:15 -- accel/accel.sh@20 -- # IFS=: 00:06:20.750 22:05:15 -- accel/accel.sh@20 -- # read -r var val 00:06:20.750 22:05:15 -- accel/accel.sh@21 -- # val= 00:06:20.750 22:05:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.750 22:05:15 -- accel/accel.sh@20 -- # IFS=: 00:06:20.750 22:05:15 -- accel/accel.sh@20 -- # read -r var val 00:06:20.750 22:05:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:20.750 22:05:15 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:20.750 22:05:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.750 00:06:20.750 real 0m2.553s 00:06:20.750 user 0m2.332s 00:06:20.750 sys 0m0.217s 00:06:20.750 22:05:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.750 22:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:20.750 ************************************ 00:06:20.750 END TEST accel_xor 00:06:20.750 ************************************ 00:06:20.750 22:05:15 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:20.750 22:05:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:20.750 22:05:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:20.750 22:05:15 -- common/autotest_common.sh@10 -- # set +x 00:06:20.750 ************************************ 00:06:20.750 START TEST accel_xor 
00:06:20.750 ************************************ 00:06:20.750 22:05:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:20.750 22:05:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.750 22:05:15 -- accel/accel.sh@17 -- # local accel_module 00:06:20.750 22:05:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:20.750 22:05:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:20.750 22:05:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.750 22:05:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:20.750 22:05:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.750 22:05:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.750 22:05:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:20.750 22:05:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:20.750 22:05:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:20.750 22:05:15 -- accel/accel.sh@42 -- # jq -r . 00:06:20.750 [2024-07-24 22:05:15.857213] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:20.750 [2024-07-24 22:05:15.857277] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388433 ] 00:06:20.750 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.009 [2024-07-24 22:05:15.910592] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.009 [2024-07-24 22:05:15.947650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.390 22:05:17 -- accel/accel.sh@18 -- # out=' 00:06:22.390 SPDK Configuration: 00:06:22.390 Core mask: 0x1 00:06:22.390 00:06:22.390 Accel Perf Configuration: 00:06:22.390 Workload Type: xor 00:06:22.390 Source buffers: 3 00:06:22.390 Transfer size: 4096 bytes 00:06:22.390 Vector count 1 00:06:22.390 Module: software 00:06:22.390 Queue depth: 32 00:06:22.390 Allocate depth: 32 00:06:22.390 # threads/core: 1 00:06:22.390 Run time: 1 seconds 00:06:22.390 Verify: Yes 00:06:22.390 00:06:22.390 Running for 1 seconds... 00:06:22.390 00:06:22.390 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.390 ------------------------------------------------------------------------------------ 00:06:22.390 0,0 451968/s 1765 MiB/s 0 0 00:06:22.390 ==================================================================================== 00:06:22.390 Total 451968/s 1765 MiB/s 0 0' 00:06:22.390 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.390 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.390 22:05:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:22.390 22:05:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:22.390 22:05:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.390 22:05:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.390 22:05:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.390 22:05:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.390 22:05:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.390 22:05:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.390 22:05:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.390 22:05:17 -- accel/accel.sh@42 -- # jq -r . 00:06:22.390 [2024-07-24 22:05:17.129190] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
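Note: this second xor case differs from the previous one only by the "-x 3" option, which is reflected in the "Source buffers: 3" line of the report above, just as "-t 1", "-y" and the 4096-byte transfer size are reflected in the "Run time", "Verify" and "Transfer size" lines. A stand-alone re-run of the same case, as a sketch (the harness additionally passes "-c /dev/fd/62" to feed its generated JSON accel config, per the trace above):

    # -t 1 -w xor -y -x 3: one-second xor run with verification over three source buffers
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w xor -y -x 3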
00:06:22.390 [2024-07-24 22:05:17.129239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388673 ] 00:06:22.390 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.390 [2024-07-24 22:05:17.181869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.390 [2024-07-24 22:05:17.218140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.390 22:05:17 -- accel/accel.sh@21 -- # val= 00:06:22.390 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.390 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.390 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.390 22:05:17 -- accel/accel.sh@21 -- # val= 00:06:22.390 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.390 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.390 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.390 22:05:17 -- accel/accel.sh@21 -- # val=0x1 00:06:22.390 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.390 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.390 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.390 22:05:17 -- accel/accel.sh@21 -- # val= 00:06:22.390 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.390 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.390 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.390 22:05:17 -- accel/accel.sh@21 -- # val= 00:06:22.391 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.391 22:05:17 -- accel/accel.sh@21 -- # val=xor 00:06:22.391 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.391 22:05:17 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.391 22:05:17 -- accel/accel.sh@21 -- # val=3 00:06:22.391 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.391 22:05:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.391 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.391 22:05:17 -- accel/accel.sh@21 -- # val= 00:06:22.391 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.391 22:05:17 -- accel/accel.sh@21 -- # val=software 00:06:22.391 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.391 22:05:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.391 22:05:17 -- accel/accel.sh@21 -- # val=32 00:06:22.391 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.391 22:05:17 -- accel/accel.sh@21 -- # val=32 00:06:22.391 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.391 22:05:17 -- 
accel/accel.sh@21 -- # val=1 00:06:22.391 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.391 22:05:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:22.391 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.391 22:05:17 -- accel/accel.sh@21 -- # val=Yes 00:06:22.391 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.391 22:05:17 -- accel/accel.sh@21 -- # val= 00:06:22.391 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:22.391 22:05:17 -- accel/accel.sh@21 -- # val= 00:06:22.391 22:05:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # IFS=: 00:06:22.391 22:05:17 -- accel/accel.sh@20 -- # read -r var val 00:06:23.331 22:05:18 -- accel/accel.sh@21 -- # val= 00:06:23.331 22:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.331 22:05:18 -- accel/accel.sh@20 -- # IFS=: 00:06:23.331 22:05:18 -- accel/accel.sh@20 -- # read -r var val 00:06:23.331 22:05:18 -- accel/accel.sh@21 -- # val= 00:06:23.331 22:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.331 22:05:18 -- accel/accel.sh@20 -- # IFS=: 00:06:23.331 22:05:18 -- accel/accel.sh@20 -- # read -r var val 00:06:23.331 22:05:18 -- accel/accel.sh@21 -- # val= 00:06:23.331 22:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.331 22:05:18 -- accel/accel.sh@20 -- # IFS=: 00:06:23.331 22:05:18 -- accel/accel.sh@20 -- # read -r var val 00:06:23.331 22:05:18 -- accel/accel.sh@21 -- # val= 00:06:23.331 22:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.331 22:05:18 -- accel/accel.sh@20 -- # IFS=: 00:06:23.331 22:05:18 -- accel/accel.sh@20 -- # read -r var val 00:06:23.331 22:05:18 -- accel/accel.sh@21 -- # val= 00:06:23.331 22:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.331 22:05:18 -- accel/accel.sh@20 -- # IFS=: 00:06:23.331 22:05:18 -- accel/accel.sh@20 -- # read -r var val 00:06:23.331 22:05:18 -- accel/accel.sh@21 -- # val= 00:06:23.332 22:05:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.332 22:05:18 -- accel/accel.sh@20 -- # IFS=: 00:06:23.332 22:05:18 -- accel/accel.sh@20 -- # read -r var val 00:06:23.332 22:05:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:23.332 22:05:18 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:23.332 22:05:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.332 00:06:23.332 real 0m2.553s 00:06:23.332 user 0m2.347s 00:06:23.332 sys 0m0.204s 00:06:23.332 22:05:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.332 22:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:23.332 ************************************ 00:06:23.332 END TEST accel_xor 00:06:23.332 ************************************ 00:06:23.332 22:05:18 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:23.332 22:05:18 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:23.332 22:05:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.332 22:05:18 -- common/autotest_common.sh@10 -- # set +x 00:06:23.332 ************************************ 00:06:23.332 START TEST 
accel_dif_verify 00:06:23.332 ************************************ 00:06:23.332 22:05:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:23.332 22:05:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:23.332 22:05:18 -- accel/accel.sh@17 -- # local accel_module 00:06:23.332 22:05:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:23.332 22:05:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:23.332 22:05:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.332 22:05:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.332 22:05:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.332 22:05:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.332 22:05:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.332 22:05:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.332 22:05:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.332 22:05:18 -- accel/accel.sh@42 -- # jq -r . 00:06:23.332 [2024-07-24 22:05:18.447460] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:23.332 [2024-07-24 22:05:18.447524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3388920 ] 00:06:23.594 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.594 [2024-07-24 22:05:18.502640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.594 [2024-07-24 22:05:18.538385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.979 22:05:19 -- accel/accel.sh@18 -- # out=' 00:06:24.979 SPDK Configuration: 00:06:24.979 Core mask: 0x1 00:06:24.979 00:06:24.979 Accel Perf Configuration: 00:06:24.979 Workload Type: dif_verify 00:06:24.979 Vector size: 4096 bytes 00:06:24.979 Transfer size: 4096 bytes 00:06:24.979 Block size: 512 bytes 00:06:24.979 Metadata size: 8 bytes 00:06:24.979 Vector count 1 00:06:24.979 Module: software 00:06:24.979 Queue depth: 32 00:06:24.979 Allocate depth: 32 00:06:24.979 # threads/core: 1 00:06:24.979 Run time: 1 seconds 00:06:24.979 Verify: No 00:06:24.979 00:06:24.979 Running for 1 seconds... 00:06:24.979 00:06:24.979 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:24.979 ------------------------------------------------------------------------------------ 00:06:24.979 0,0 130752/s 518 MiB/s 0 0 00:06:24.979 ==================================================================================== 00:06:24.979 Total 130752/s 510 MiB/s 0 0' 00:06:24.979 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.979 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.979 22:05:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:24.980 22:05:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:24.980 22:05:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.980 22:05:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.980 22:05:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.980 22:05:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.980 22:05:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.980 22:05:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.980 22:05:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.980 22:05:19 -- accel/accel.sh@42 -- # jq -r . 
00:06:24.980 [2024-07-24 22:05:19.734030] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:24.980 [2024-07-24 22:05:19.734115] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389117 ] 00:06:24.980 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.980 [2024-07-24 22:05:19.790114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.980 [2024-07-24 22:05:19.826710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val= 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val= 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val=0x1 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val= 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val= 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val=dif_verify 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val= 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val=software 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val=32 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val=32 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val=1 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val=No 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val= 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:24.980 22:05:19 -- accel/accel.sh@21 -- # val= 00:06:24.980 22:05:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # IFS=: 00:06:24.980 22:05:19 -- accel/accel.sh@20 -- # read -r var val 00:06:25.920 22:05:20 -- accel/accel.sh@21 -- # val= 00:06:25.920 22:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.920 22:05:20 -- accel/accel.sh@20 -- # IFS=: 00:06:25.920 22:05:20 -- accel/accel.sh@20 -- # read -r var val 00:06:25.920 22:05:20 -- accel/accel.sh@21 -- # val= 00:06:25.920 22:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.920 22:05:20 -- accel/accel.sh@20 -- # IFS=: 00:06:25.920 22:05:20 -- accel/accel.sh@20 -- # read -r var val 00:06:25.920 22:05:20 -- accel/accel.sh@21 -- # val= 00:06:25.920 22:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.920 22:05:20 -- accel/accel.sh@20 -- # IFS=: 00:06:25.920 22:05:20 -- accel/accel.sh@20 -- # read -r var val 00:06:25.920 22:05:20 -- accel/accel.sh@21 -- # val= 00:06:25.920 22:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.920 22:05:20 -- accel/accel.sh@20 -- # IFS=: 00:06:25.920 22:05:20 -- accel/accel.sh@20 -- # read -r var val 00:06:25.920 22:05:20 -- accel/accel.sh@21 -- # val= 00:06:25.920 22:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.920 22:05:20 -- accel/accel.sh@20 -- # IFS=: 00:06:25.920 22:05:20 -- accel/accel.sh@20 -- # read -r var val 00:06:25.920 22:05:20 -- accel/accel.sh@21 -- # val= 00:06:25.920 22:05:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.920 22:05:20 -- accel/accel.sh@20 -- # IFS=: 00:06:25.920 22:05:20 -- accel/accel.sh@20 -- # read -r var val 00:06:25.920 22:05:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.920 22:05:20 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:25.920 22:05:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.920 00:06:25.920 real 0m2.574s 00:06:25.920 user 0m2.356s 00:06:25.920 sys 0m0.216s 00:06:25.920 22:05:20 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.920 22:05:20 -- common/autotest_common.sh@10 -- # set +x 00:06:25.920 ************************************ 00:06:25.920 END TEST accel_dif_verify 00:06:25.920 ************************************ 00:06:25.920 22:05:21 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:25.920 22:05:21 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:25.920 22:05:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.920 22:05:21 -- common/autotest_common.sh@10 -- # set +x 00:06:25.920 ************************************ 00:06:25.920 START TEST accel_dif_generate 00:06:25.920 ************************************ 00:06:25.920 22:05:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:25.920 22:05:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.920 22:05:21 -- accel/accel.sh@17 -- # local accel_module 00:06:25.920 22:05:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:25.920 22:05:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:25.920 22:05:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.920 22:05:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.920 22:05:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.920 22:05:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.920 22:05:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.920 22:05:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.920 22:05:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.920 22:05:21 -- accel/accel.sh@42 -- # jq -r . 00:06:25.920 [2024-07-24 22:05:21.051655] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:25.920 [2024-07-24 22:05:21.051730] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389329 ] 00:06:26.180 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.180 [2024-07-24 22:05:21.106390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.180 [2024-07-24 22:05:21.144895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.562 22:05:22 -- accel/accel.sh@18 -- # out=' 00:06:27.562 SPDK Configuration: 00:06:27.562 Core mask: 0x1 00:06:27.562 00:06:27.562 Accel Perf Configuration: 00:06:27.562 Workload Type: dif_generate 00:06:27.562 Vector size: 4096 bytes 00:06:27.562 Transfer size: 4096 bytes 00:06:27.562 Block size: 512 bytes 00:06:27.562 Metadata size: 8 bytes 00:06:27.562 Vector count 1 00:06:27.562 Module: software 00:06:27.562 Queue depth: 32 00:06:27.562 Allocate depth: 32 00:06:27.562 # threads/core: 1 00:06:27.562 Run time: 1 seconds 00:06:27.562 Verify: No 00:06:27.562 00:06:27.562 Running for 1 seconds... 
00:06:27.562 00:06:27.562 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.562 ------------------------------------------------------------------------------------ 00:06:27.562 0,0 157600/s 625 MiB/s 0 0 00:06:27.562 ==================================================================================== 00:06:27.562 Total 157600/s 615 MiB/s 0 0' 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:27.562 22:05:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:27.562 22:05:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.562 22:05:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.562 22:05:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.562 22:05:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.562 22:05:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.562 22:05:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.562 22:05:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.562 22:05:22 -- accel/accel.sh@42 -- # jq -r . 00:06:27.562 [2024-07-24 22:05:22.328194] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:27.562 [2024-07-24 22:05:22.328244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389514 ] 00:06:27.562 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.562 [2024-07-24 22:05:22.381076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.562 [2024-07-24 22:05:22.417675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val= 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val= 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val=0x1 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val= 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val= 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val=dif_generate 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 
00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val= 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val=software 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@23 -- # accel_module=software 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val=32 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val=32 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val=1 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val=No 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val= 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:27.562 22:05:22 -- accel/accel.sh@21 -- # val= 00:06:27.562 22:05:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # IFS=: 00:06:27.562 22:05:22 -- accel/accel.sh@20 -- # read -r var val 00:06:28.498 22:05:23 -- accel/accel.sh@21 -- # val= 00:06:28.498 22:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.498 22:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:28.498 22:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:28.498 22:05:23 -- accel/accel.sh@21 -- # val= 00:06:28.498 22:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.498 22:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:28.498 22:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:28.498 22:05:23 -- accel/accel.sh@21 -- # val= 00:06:28.498 22:05:23 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:28.498 22:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:28.498 22:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:28.498 22:05:23 -- accel/accel.sh@21 -- # val= 00:06:28.498 22:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.498 22:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:28.498 22:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:28.498 22:05:23 -- accel/accel.sh@21 -- # val= 00:06:28.498 22:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.498 22:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:28.498 22:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:28.498 22:05:23 -- accel/accel.sh@21 -- # val= 00:06:28.498 22:05:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.498 22:05:23 -- accel/accel.sh@20 -- # IFS=: 00:06:28.498 22:05:23 -- accel/accel.sh@20 -- # read -r var val 00:06:28.498 22:05:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.498 22:05:23 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:28.498 22:05:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.498 00:06:28.498 real 0m2.560s 00:06:28.498 user 0m2.351s 00:06:28.498 sys 0m0.207s 00:06:28.498 22:05:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.498 22:05:23 -- common/autotest_common.sh@10 -- # set +x 00:06:28.498 ************************************ 00:06:28.498 END TEST accel_dif_generate 00:06:28.498 ************************************ 00:06:28.498 22:05:23 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:28.498 22:05:23 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:28.498 22:05:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.498 22:05:23 -- common/autotest_common.sh@10 -- # set +x 00:06:28.498 ************************************ 00:06:28.498 START TEST accel_dif_generate_copy 00:06:28.498 ************************************ 00:06:28.498 22:05:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:28.498 22:05:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.498 22:05:23 -- accel/accel.sh@17 -- # local accel_module 00:06:28.498 22:05:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:28.498 22:05:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:28.498 22:05:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.499 22:05:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.499 22:05:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.499 22:05:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.499 22:05:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.499 22:05:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.499 22:05:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.499 22:05:23 -- accel/accel.sh@42 -- # jq -r . 00:06:28.757 [2024-07-24 22:05:23.644912] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
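Note: the three DIF cases in this stretch of the log are driven by consecutive run_test calls in accel.sh, traced above at accel.sh@103 through @105. As the operation names suggest, dif_generate inserts the protection field, dif_generate_copy does the same while also copying the data to a destination buffer, and dif_verify checks existing protection fields. Reconstructed from the trace:

    run_test accel_dif_verify        accel_test -t 1 -w dif_verify
    run_test accel_dif_generate      accel_test -t 1 -w dif_generate
    run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy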
00:06:28.757 [2024-07-24 22:05:23.644987] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389712 ] 00:06:28.757 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.757 [2024-07-24 22:05:23.701935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.757 [2024-07-24 22:05:23.739436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.136 22:05:24 -- accel/accel.sh@18 -- # out=' 00:06:30.136 SPDK Configuration: 00:06:30.136 Core mask: 0x1 00:06:30.136 00:06:30.136 Accel Perf Configuration: 00:06:30.136 Workload Type: dif_generate_copy 00:06:30.136 Vector size: 4096 bytes 00:06:30.136 Transfer size: 4096 bytes 00:06:30.136 Vector count 1 00:06:30.136 Module: software 00:06:30.136 Queue depth: 32 00:06:30.136 Allocate depth: 32 00:06:30.136 # threads/core: 1 00:06:30.136 Run time: 1 seconds 00:06:30.136 Verify: No 00:06:30.136 00:06:30.136 Running for 1 seconds... 00:06:30.136 00:06:30.136 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:30.136 ------------------------------------------------------------------------------------ 00:06:30.136 0,0 121664/s 482 MiB/s 0 0 00:06:30.136 ==================================================================================== 00:06:30.136 Total 121664/s 475 MiB/s 0 0' 00:06:30.136 22:05:24 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:24 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:30.136 22:05:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:30.136 22:05:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.136 22:05:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.136 22:05:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.136 22:05:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.136 22:05:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.136 22:05:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.136 22:05:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.136 22:05:24 -- accel/accel.sh@42 -- # jq -r . 00:06:30.136 [2024-07-24 22:05:24.922594] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
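Note: the throughput columns in these reports can be cross-checked as transfers per second times the 4096-byte transfer size. For the dif_generate_copy "Total" row just above, assuming 1 MiB = 1024*1024 bytes:

    echo $(( 121664 * 4096 / 1024 / 1024 ))   # -> 475 (MiB/s), matching "Total 121664/s 475 MiB/s"

The per-core row reports a slightly higher 482 MiB/s; presumably it is computed against the measured elapsed time rather than the nominal one second, but that is an assumption, and the arithmetic above only checks the Total row.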
00:06:30.136 [2024-07-24 22:05:24.922653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389910 ] 00:06:30.136 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.136 [2024-07-24 22:05:24.976640] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.136 [2024-07-24 22:05:25.012979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val= 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val= 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val=0x1 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val= 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val= 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val= 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val=software 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val=32 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val=32 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r 
var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val=1 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val=No 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val= 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:30.136 22:05:25 -- accel/accel.sh@21 -- # val= 00:06:30.136 22:05:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # IFS=: 00:06:30.136 22:05:25 -- accel/accel.sh@20 -- # read -r var val 00:06:31.073 22:05:26 -- accel/accel.sh@21 -- # val= 00:06:31.073 22:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.073 22:05:26 -- accel/accel.sh@20 -- # IFS=: 00:06:31.073 22:05:26 -- accel/accel.sh@20 -- # read -r var val 00:06:31.073 22:05:26 -- accel/accel.sh@21 -- # val= 00:06:31.073 22:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.073 22:05:26 -- accel/accel.sh@20 -- # IFS=: 00:06:31.073 22:05:26 -- accel/accel.sh@20 -- # read -r var val 00:06:31.073 22:05:26 -- accel/accel.sh@21 -- # val= 00:06:31.073 22:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.073 22:05:26 -- accel/accel.sh@20 -- # IFS=: 00:06:31.073 22:05:26 -- accel/accel.sh@20 -- # read -r var val 00:06:31.073 22:05:26 -- accel/accel.sh@21 -- # val= 00:06:31.073 22:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.073 22:05:26 -- accel/accel.sh@20 -- # IFS=: 00:06:31.073 22:05:26 -- accel/accel.sh@20 -- # read -r var val 00:06:31.073 22:05:26 -- accel/accel.sh@21 -- # val= 00:06:31.073 22:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.073 22:05:26 -- accel/accel.sh@20 -- # IFS=: 00:06:31.073 22:05:26 -- accel/accel.sh@20 -- # read -r var val 00:06:31.073 22:05:26 -- accel/accel.sh@21 -- # val= 00:06:31.073 22:05:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.073 22:05:26 -- accel/accel.sh@20 -- # IFS=: 00:06:31.073 22:05:26 -- accel/accel.sh@20 -- # read -r var val 00:06:31.073 22:05:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:31.073 22:05:26 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:31.073 22:05:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.073 00:06:31.073 real 0m2.562s 00:06:31.073 user 0m2.327s 00:06:31.073 sys 0m0.231s 00:06:31.073 22:05:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.073 22:05:26 -- common/autotest_common.sh@10 -- # set +x 00:06:31.073 ************************************ 00:06:31.073 END TEST accel_dif_generate_copy 00:06:31.073 ************************************ 00:06:31.332 22:05:26 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:31.332 22:05:26 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.332 22:05:26 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:31.332 22:05:26 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:31.332 22:05:26 -- common/autotest_common.sh@10 -- # set +x 00:06:31.332 ************************************ 00:06:31.333 START TEST accel_comp 00:06:31.333 ************************************ 00:06:31.333 22:05:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.333 22:05:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.333 22:05:26 -- accel/accel.sh@17 -- # local accel_module 00:06:31.333 22:05:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.333 22:05:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.333 22:05:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.333 22:05:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.333 22:05:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.333 22:05:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.333 22:05:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.333 22:05:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.333 22:05:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.333 22:05:26 -- accel/accel.sh@42 -- # jq -r . 00:06:31.333 [2024-07-24 22:05:26.237080] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:31.333 [2024-07-24 22:05:26.237159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3390167 ] 00:06:31.333 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.333 [2024-07-24 22:05:26.291447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.333 [2024-07-24 22:05:26.328708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.756 22:05:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:32.756 00:06:32.756 SPDK Configuration: 00:06:32.756 Core mask: 0x1 00:06:32.756 00:06:32.756 Accel Perf Configuration: 00:06:32.756 Workload Type: compress 00:06:32.756 Transfer size: 4096 bytes 00:06:32.756 Vector count 1 00:06:32.756 Module: software 00:06:32.756 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.756 Queue depth: 32 00:06:32.756 Allocate depth: 32 00:06:32.756 # threads/core: 1 00:06:32.756 Run time: 1 seconds 00:06:32.756 Verify: No 00:06:32.756 00:06:32.756 Running for 1 seconds... 
00:06:32.756 00:06:32.756 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.756 ------------------------------------------------------------------------------------ 00:06:32.756 0,0 62048/s 258 MiB/s 0 0 00:06:32.756 ==================================================================================== 00:06:32.756 Total 62048/s 242 MiB/s 0 0' 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.756 22:05:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.756 22:05:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.756 22:05:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.756 22:05:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.756 22:05:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.756 22:05:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.756 22:05:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.756 22:05:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.756 22:05:27 -- accel/accel.sh@42 -- # jq -r . 00:06:32.756 [2024-07-24 22:05:27.525862] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:32.756 [2024-07-24 22:05:27.525938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3390401 ] 00:06:32.756 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.756 [2024-07-24 22:05:27.582498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.756 [2024-07-24 22:05:27.618643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val= 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val= 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val= 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val=0x1 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val= 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val= 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val=compress 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 
22:05:27 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val= 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val=software 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val=32 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val=32 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val=1 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val=No 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val= 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:32.756 22:05:27 -- accel/accel.sh@21 -- # val= 00:06:32.756 22:05:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # IFS=: 00:06:32.756 22:05:27 -- accel/accel.sh@20 -- # read -r var val 00:06:33.720 22:05:28 -- accel/accel.sh@21 -- # val= 00:06:33.720 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.720 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:06:33.720 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:06:33.720 22:05:28 -- accel/accel.sh@21 -- # val= 00:06:33.720 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.720 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:06:33.720 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:06:33.720 22:05:28 -- accel/accel.sh@21 -- # val= 00:06:33.720 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.720 22:05:28 -- accel/accel.sh@20 -- # 
IFS=: 00:06:33.720 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:06:33.720 22:05:28 -- accel/accel.sh@21 -- # val= 00:06:33.720 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.720 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:06:33.720 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:06:33.720 22:05:28 -- accel/accel.sh@21 -- # val= 00:06:33.720 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.720 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:06:33.720 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:06:33.720 22:05:28 -- accel/accel.sh@21 -- # val= 00:06:33.720 22:05:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.720 22:05:28 -- accel/accel.sh@20 -- # IFS=: 00:06:33.720 22:05:28 -- accel/accel.sh@20 -- # read -r var val 00:06:33.720 22:05:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.720 22:05:28 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:33.720 22:05:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.720 00:06:33.720 real 0m2.576s 00:06:33.720 user 0m2.359s 00:06:33.720 sys 0m0.213s 00:06:33.720 22:05:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.720 22:05:28 -- common/autotest_common.sh@10 -- # set +x 00:06:33.720 ************************************ 00:06:33.720 END TEST accel_comp 00:06:33.720 ************************************ 00:06:33.720 22:05:28 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:33.720 22:05:28 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:33.720 22:05:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.720 22:05:28 -- common/autotest_common.sh@10 -- # set +x 00:06:33.720 ************************************ 00:06:33.720 START TEST accel_decomp 00:06:33.720 ************************************ 00:06:33.720 22:05:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:33.720 22:05:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.720 22:05:28 -- accel/accel.sh@17 -- # local accel_module 00:06:33.720 22:05:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:33.720 22:05:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:33.721 22:05:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.721 22:05:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.721 22:05:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.721 22:05:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.721 22:05:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.721 22:05:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.721 22:05:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.721 22:05:28 -- accel/accel.sh@42 -- # jq -r . 00:06:33.721 [2024-07-24 22:05:28.842907] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:33.721 [2024-07-24 22:05:28.842986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3390650 ] 00:06:33.979 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.979 [2024-07-24 22:05:28.897192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.979 [2024-07-24 22:05:28.934640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.360 22:05:30 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:35.360 00:06:35.360 SPDK Configuration: 00:06:35.360 Core mask: 0x1 00:06:35.360 00:06:35.360 Accel Perf Configuration: 00:06:35.360 Workload Type: decompress 00:06:35.360 Transfer size: 4096 bytes 00:06:35.360 Vector count 1 00:06:35.360 Module: software 00:06:35.360 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:35.360 Queue depth: 32 00:06:35.360 Allocate depth: 32 00:06:35.360 # threads/core: 1 00:06:35.360 Run time: 1 seconds 00:06:35.360 Verify: Yes 00:06:35.360 00:06:35.360 Running for 1 seconds... 00:06:35.360 00:06:35.360 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:35.360 ------------------------------------------------------------------------------------ 00:06:35.360 0,0 74592/s 137 MiB/s 0 0 00:06:35.360 ==================================================================================== 00:06:35.360 Total 74592/s 291 MiB/s 0 0' 00:06:35.360 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.360 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.360 22:05:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.360 22:05:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.360 22:05:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.360 22:05:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.360 22:05:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.360 22:05:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.360 22:05:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.360 22:05:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.360 22:05:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.360 22:05:30 -- accel/accel.sh@42 -- # jq -r . 00:06:35.360 [2024-07-24 22:05:30.129536] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:35.360 [2024-07-24 22:05:30.129603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3390884 ] 00:06:35.360 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.360 [2024-07-24 22:05:30.183864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.360 [2024-07-24 22:05:30.220420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.360 22:05:30 -- accel/accel.sh@21 -- # val= 00:06:35.360 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.360 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.360 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.360 22:05:30 -- accel/accel.sh@21 -- # val= 00:06:35.360 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.360 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.360 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.360 22:05:30 -- accel/accel.sh@21 -- # val= 00:06:35.360 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.360 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.360 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val=0x1 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val= 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val= 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val=decompress 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val= 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val=software 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@23 -- # accel_module=software 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val=32 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 
-- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val=32 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val=1 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val=Yes 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val= 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:35.361 22:05:30 -- accel/accel.sh@21 -- # val= 00:06:35.361 22:05:30 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # IFS=: 00:06:35.361 22:05:30 -- accel/accel.sh@20 -- # read -r var val 00:06:36.299 22:05:31 -- accel/accel.sh@21 -- # val= 00:06:36.299 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.299 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:06:36.299 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:06:36.299 22:05:31 -- accel/accel.sh@21 -- # val= 00:06:36.299 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.299 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:06:36.299 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:06:36.299 22:05:31 -- accel/accel.sh@21 -- # val= 00:06:36.299 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.299 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:06:36.299 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:06:36.299 22:05:31 -- accel/accel.sh@21 -- # val= 00:06:36.299 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.299 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:06:36.299 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:06:36.299 22:05:31 -- accel/accel.sh@21 -- # val= 00:06:36.299 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.299 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:06:36.299 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:06:36.299 22:05:31 -- accel/accel.sh@21 -- # val= 00:06:36.299 22:05:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.299 22:05:31 -- accel/accel.sh@20 -- # IFS=: 00:06:36.299 22:05:31 -- accel/accel.sh@20 -- # read -r var val 00:06:36.299 22:05:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.299 22:05:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:36.299 22:05:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.299 00:06:36.299 real 0m2.572s 00:06:36.299 user 0m2.349s 00:06:36.299 sys 0m0.220s 00:06:36.299 22:05:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.299 22:05:31 -- common/autotest_common.sh@10 -- # set +x 00:06:36.299 ************************************ 00:06:36.299 END TEST accel_decomp 00:06:36.299 ************************************ 00:06:36.299 22:05:31 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.299 22:05:31 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:36.299 22:05:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.299 22:05:31 -- common/autotest_common.sh@10 -- # set +x 00:06:36.299 ************************************ 00:06:36.299 START TEST accel_decmop_full 00:06:36.299 ************************************ 00:06:36.299 22:05:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.299 22:05:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.299 22:05:31 -- accel/accel.sh@17 -- # local accel_module 00:06:36.299 22:05:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.299 22:05:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:36.299 22:05:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.299 22:05:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.299 22:05:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.299 22:05:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.299 22:05:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.299 22:05:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.299 22:05:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.299 22:05:31 -- accel/accel.sh@42 -- # jq -r . 00:06:36.559 [2024-07-24 22:05:31.447217] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:36.559 [2024-07-24 22:05:31.447291] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391137 ] 00:06:36.559 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.559 [2024-07-24 22:05:31.502920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.559 [2024-07-24 22:05:31.539652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.940 22:05:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:37.940 00:06:37.940 SPDK Configuration: 00:06:37.940 Core mask: 0x1 00:06:37.940 00:06:37.940 Accel Perf Configuration: 00:06:37.940 Workload Type: decompress 00:06:37.940 Transfer size: 111250 bytes 00:06:37.940 Vector count 1 00:06:37.940 Module: software 00:06:37.940 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.940 Queue depth: 32 00:06:37.940 Allocate depth: 32 00:06:37.940 # threads/core: 1 00:06:37.940 Run time: 1 seconds 00:06:37.940 Verify: Yes 00:06:37.940 00:06:37.940 Running for 1 seconds... 
00:06:37.940 00:06:37.940 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.940 ------------------------------------------------------------------------------------ 00:06:37.940 0,0 4928/s 203 MiB/s 0 0 00:06:37.940 ==================================================================================== 00:06:37.940 Total 4928/s 522 MiB/s 0 0' 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.940 22:05:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:37.940 22:05:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.940 22:05:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.940 22:05:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.940 22:05:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.940 22:05:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.940 22:05:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.940 22:05:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.940 22:05:32 -- accel/accel.sh@42 -- # jq -r . 00:06:37.940 [2024-07-24 22:05:32.730424] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:37.940 [2024-07-24 22:05:32.730484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391375 ] 00:06:37.940 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.940 [2024-07-24 22:05:32.784347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.940 [2024-07-24 22:05:32.820690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val= 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val= 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val= 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val=0x1 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val= 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val= 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val=decompress 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" 
in 00:06:37.940 22:05:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val= 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val=software 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val=32 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val=32 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val=1 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val=Yes 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val= 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:37.940 22:05:32 -- accel/accel.sh@21 -- # val= 00:06:37.940 22:05:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # IFS=: 00:06:37.940 22:05:32 -- accel/accel.sh@20 -- # read -r var val 00:06:38.878 22:05:33 -- accel/accel.sh@21 -- # val= 00:06:38.878 22:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.878 22:05:33 -- accel/accel.sh@20 -- # IFS=: 00:06:38.878 22:05:33 -- accel/accel.sh@20 -- # read -r var val 00:06:38.878 22:05:33 -- accel/accel.sh@21 -- # val= 00:06:38.878 22:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.878 22:05:33 -- accel/accel.sh@20 -- # IFS=: 00:06:38.878 22:05:33 -- accel/accel.sh@20 -- # read -r var val 00:06:38.878 22:05:33 -- accel/accel.sh@21 -- # val= 00:06:38.878 22:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.878 22:05:33 -- 
accel/accel.sh@20 -- # IFS=: 00:06:38.878 22:05:33 -- accel/accel.sh@20 -- # read -r var val 00:06:38.878 22:05:33 -- accel/accel.sh@21 -- # val= 00:06:38.878 22:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.878 22:05:33 -- accel/accel.sh@20 -- # IFS=: 00:06:38.878 22:05:33 -- accel/accel.sh@20 -- # read -r var val 00:06:38.878 22:05:33 -- accel/accel.sh@21 -- # val= 00:06:38.878 22:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.878 22:05:33 -- accel/accel.sh@20 -- # IFS=: 00:06:38.878 22:05:33 -- accel/accel.sh@20 -- # read -r var val 00:06:38.878 22:05:33 -- accel/accel.sh@21 -- # val= 00:06:38.878 22:05:33 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.878 22:05:33 -- accel/accel.sh@20 -- # IFS=: 00:06:38.878 22:05:33 -- accel/accel.sh@20 -- # read -r var val 00:06:38.878 22:05:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.878 22:05:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:38.878 22:05:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.878 00:06:38.878 real 0m2.577s 00:06:38.878 user 0m2.362s 00:06:38.878 sys 0m0.211s 00:06:38.878 22:05:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.878 22:05:33 -- common/autotest_common.sh@10 -- # set +x 00:06:38.878 ************************************ 00:06:38.878 END TEST accel_decmop_full 00:06:38.878 ************************************ 00:06:39.139 22:05:34 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.139 22:05:34 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:39.139 22:05:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.139 22:05:34 -- common/autotest_common.sh@10 -- # set +x 00:06:39.139 ************************************ 00:06:39.139 START TEST accel_decomp_mcore 00:06:39.139 ************************************ 00:06:39.139 22:05:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.139 22:05:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.139 22:05:34 -- accel/accel.sh@17 -- # local accel_module 00:06:39.139 22:05:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.139 22:05:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:39.139 22:05:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.139 22:05:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.139 22:05:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.139 22:05:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.139 22:05:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.139 22:05:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.139 22:05:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.139 22:05:34 -- accel/accel.sh@42 -- # jq -r . 00:06:39.139 [2024-07-24 22:05:34.061383] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:39.139 [2024-07-24 22:05:34.061459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391623 ] 00:06:39.139 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.139 [2024-07-24 22:05:34.119450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.139 [2024-07-24 22:05:34.157379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.139 [2024-07-24 22:05:34.157479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.139 [2024-07-24 22:05:34.157555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.139 [2024-07-24 22:05:34.157556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.522 22:05:35 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:40.522 00:06:40.522 SPDK Configuration: 00:06:40.522 Core mask: 0xf 00:06:40.522 00:06:40.522 Accel Perf Configuration: 00:06:40.522 Workload Type: decompress 00:06:40.522 Transfer size: 4096 bytes 00:06:40.522 Vector count 1 00:06:40.522 Module: software 00:06:40.522 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.522 Queue depth: 32 00:06:40.522 Allocate depth: 32 00:06:40.522 # threads/core: 1 00:06:40.522 Run time: 1 seconds 00:06:40.522 Verify: Yes 00:06:40.522 00:06:40.522 Running for 1 seconds... 00:06:40.522 00:06:40.522 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.522 ------------------------------------------------------------------------------------ 00:06:40.522 0,0 61536/s 113 MiB/s 0 0 00:06:40.522 3,0 62016/s 114 MiB/s 0 0 00:06:40.522 2,0 62048/s 114 MiB/s 0 0 00:06:40.522 1,0 61952/s 114 MiB/s 0 0 00:06:40.523 ==================================================================================== 00:06:40.523 Total 247552/s 967 MiB/s 0 0' 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.523 22:05:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:40.523 22:05:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.523 22:05:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.523 22:05:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.523 22:05:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.523 22:05:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.523 22:05:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.523 22:05:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.523 22:05:35 -- accel/accel.sh@42 -- # jq -r . 00:06:40.523 [2024-07-24 22:05:35.359905] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:40.523 [2024-07-24 22:05:35.359983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391861 ] 00:06:40.523 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.523 [2024-07-24 22:05:35.414782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.523 [2024-07-24 22:05:35.453362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.523 [2024-07-24 22:05:35.453463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.523 [2024-07-24 22:05:35.453528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.523 [2024-07-24 22:05:35.453529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val= 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val= 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val= 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val=0xf 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val= 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val= 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val=decompress 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val= 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val=software 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val=32 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val=32 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val=1 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val=Yes 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val= 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:40.523 22:05:35 -- accel/accel.sh@21 -- # val= 00:06:40.523 22:05:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # IFS=: 00:06:40.523 22:05:35 -- accel/accel.sh@20 -- # read -r var val 00:06:41.903 22:05:36 -- accel/accel.sh@21 -- # val= 00:06:41.903 22:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.903 22:05:36 -- accel/accel.sh@21 -- # val= 00:06:41.903 22:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.903 22:05:36 -- accel/accel.sh@21 -- # val= 00:06:41.903 22:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.903 22:05:36 -- accel/accel.sh@21 -- # val= 00:06:41.903 22:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.903 22:05:36 -- accel/accel.sh@21 -- # val= 00:06:41.903 22:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.903 22:05:36 -- accel/accel.sh@21 -- # val= 00:06:41.903 22:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.903 22:05:36 -- accel/accel.sh@21 -- # val= 00:06:41.903 22:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.903 22:05:36 -- accel/accel.sh@21 -- # val= 00:06:41.903 22:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.903 
22:05:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.903 22:05:36 -- accel/accel.sh@21 -- # val= 00:06:41.903 22:05:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.903 22:05:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.903 22:05:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.903 22:05:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:41.903 22:05:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.903 00:06:41.903 real 0m2.603s 00:06:41.903 user 0m9.017s 00:06:41.903 sys 0m0.250s 00:06:41.903 22:05:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.903 22:05:36 -- common/autotest_common.sh@10 -- # set +x 00:06:41.903 ************************************ 00:06:41.903 END TEST accel_decomp_mcore 00:06:41.903 ************************************ 00:06:41.903 22:05:36 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:41.903 22:05:36 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:41.903 22:05:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.903 22:05:36 -- common/autotest_common.sh@10 -- # set +x 00:06:41.903 ************************************ 00:06:41.903 START TEST accel_decomp_full_mcore 00:06:41.903 ************************************ 00:06:41.903 22:05:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:41.903 22:05:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.903 22:05:36 -- accel/accel.sh@17 -- # local accel_module 00:06:41.903 22:05:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:41.903 22:05:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:41.903 22:05:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.903 22:05:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.903 22:05:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.903 22:05:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.903 22:05:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.903 22:05:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.903 22:05:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.903 22:05:36 -- accel/accel.sh@42 -- # jq -r . 00:06:41.903 [2024-07-24 22:05:36.702691] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:41.903 [2024-07-24 22:05:36.702767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3392117 ] 00:06:41.903 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.903 [2024-07-24 22:05:36.758944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.903 [2024-07-24 22:05:36.798085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.903 [2024-07-24 22:05:36.798124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.903 [2024-07-24 22:05:36.798210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:41.903 [2024-07-24 22:05:36.798212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.285 22:05:37 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:43.285 00:06:43.285 SPDK Configuration: 00:06:43.285 Core mask: 0xf 00:06:43.285 00:06:43.285 Accel Perf Configuration: 00:06:43.285 Workload Type: decompress 00:06:43.285 Transfer size: 111250 bytes 00:06:43.285 Vector count 1 00:06:43.285 Module: software 00:06:43.285 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.285 Queue depth: 32 00:06:43.285 Allocate depth: 32 00:06:43.285 # threads/core: 1 00:06:43.285 Run time: 1 seconds 00:06:43.285 Verify: Yes 00:06:43.285 00:06:43.285 Running for 1 seconds... 00:06:43.285 00:06:43.285 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:43.285 ------------------------------------------------------------------------------------ 00:06:43.285 0,0 4640/s 191 MiB/s 0 0 00:06:43.285 3,0 4672/s 192 MiB/s 0 0 00:06:43.285 2,0 4672/s 192 MiB/s 0 0 00:06:43.285 1,0 4672/s 192 MiB/s 0 0 00:06:43.285 ==================================================================================== 00:06:43.285 Total 18656/s 1979 MiB/s 0 0' 00:06:43.285 22:05:37 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:37 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.286 22:05:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.286 22:05:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.286 22:05:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.286 22:05:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.286 22:05:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.286 22:05:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.286 22:05:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.286 22:05:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.286 22:05:37 -- accel/accel.sh@42 -- # jq -r . 00:06:43.286 [2024-07-24 22:05:38.011631] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:43.286 [2024-07-24 22:05:38.011688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3392354 ] 00:06:43.286 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.286 [2024-07-24 22:05:38.065167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.286 [2024-07-24 22:05:38.103515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.286 [2024-07-24 22:05:38.103612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.286 [2024-07-24 22:05:38.103697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.286 [2024-07-24 22:05:38.103698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val=0xf 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val=decompress 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val=software 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val=32 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val=32 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val=1 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val=Yes 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:43.286 22:05:38 -- accel/accel.sh@21 -- # val= 00:06:43.286 22:05:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # IFS=: 00:06:43.286 22:05:38 -- accel/accel.sh@20 -- # read -r var val 00:06:44.225 22:05:39 -- accel/accel.sh@21 -- # val= 00:06:44.225 22:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.225 22:05:39 -- accel/accel.sh@21 -- # val= 00:06:44.225 22:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.225 22:05:39 -- accel/accel.sh@21 -- # val= 00:06:44.225 22:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.225 22:05:39 -- accel/accel.sh@21 -- # val= 00:06:44.225 22:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.225 22:05:39 -- accel/accel.sh@21 -- # val= 00:06:44.225 22:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.225 22:05:39 -- accel/accel.sh@21 -- # val= 00:06:44.225 22:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.225 22:05:39 -- accel/accel.sh@21 -- # val= 00:06:44.225 22:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.225 22:05:39 -- accel/accel.sh@21 -- # val= 00:06:44.225 22:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.225 
22:05:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.225 22:05:39 -- accel/accel.sh@21 -- # val= 00:06:44.225 22:05:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.225 22:05:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.225 22:05:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.225 22:05:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:44.225 22:05:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.225 00:06:44.225 real 0m2.624s 00:06:44.225 user 0m9.117s 00:06:44.225 sys 0m0.237s 00:06:44.225 22:05:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.225 22:05:39 -- common/autotest_common.sh@10 -- # set +x 00:06:44.225 ************************************ 00:06:44.225 END TEST accel_decomp_full_mcore 00:06:44.225 ************************************ 00:06:44.225 22:05:39 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:44.225 22:05:39 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:44.225 22:05:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.225 22:05:39 -- common/autotest_common.sh@10 -- # set +x 00:06:44.225 ************************************ 00:06:44.225 START TEST accel_decomp_mthread 00:06:44.225 ************************************ 00:06:44.225 22:05:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:44.225 22:05:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.225 22:05:39 -- accel/accel.sh@17 -- # local accel_module 00:06:44.225 22:05:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:44.226 22:05:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:44.226 22:05:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.226 22:05:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.226 22:05:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.226 22:05:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.226 22:05:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.226 22:05:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.226 22:05:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.226 22:05:39 -- accel/accel.sh@42 -- # jq -r . 00:06:44.485 [2024-07-24 22:05:39.364601] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:44.485 [2024-07-24 22:05:39.364677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3392608 ] 00:06:44.485 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.485 [2024-07-24 22:05:39.419634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.485 [2024-07-24 22:05:39.456263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.867 22:05:40 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:45.867 00:06:45.867 SPDK Configuration: 00:06:45.867 Core mask: 0x1 00:06:45.867 00:06:45.867 Accel Perf Configuration: 00:06:45.867 Workload Type: decompress 00:06:45.867 Transfer size: 4096 bytes 00:06:45.867 Vector count 1 00:06:45.867 Module: software 00:06:45.867 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.867 Queue depth: 32 00:06:45.867 Allocate depth: 32 00:06:45.867 # threads/core: 2 00:06:45.867 Run time: 1 seconds 00:06:45.867 Verify: Yes 00:06:45.867 00:06:45.867 Running for 1 seconds... 00:06:45.867 00:06:45.867 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.867 ------------------------------------------------------------------------------------ 00:06:45.867 0,1 37440/s 69 MiB/s 0 0 00:06:45.867 0,0 37344/s 68 MiB/s 0 0 00:06:45.867 ==================================================================================== 00:06:45.867 Total 74784/s 292 MiB/s 0 0' 00:06:45.867 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.867 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.867 22:05:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.867 22:05:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.867 22:05:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.867 22:05:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.867 22:05:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.867 22:05:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.867 22:05:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.867 22:05:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.867 22:05:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.867 22:05:40 -- accel/accel.sh@42 -- # jq -r . 00:06:45.867 [2024-07-24 22:05:40.655283] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:45.867 [2024-07-24 22:05:40.655358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3392850 ] 00:06:45.867 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.867 [2024-07-24 22:05:40.711525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.867 [2024-07-24 22:05:40.747203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.867 22:05:40 -- accel/accel.sh@21 -- # val= 00:06:45.867 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.867 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.867 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val= 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val= 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val=0x1 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val= 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val= 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val=decompress 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val= 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val=software 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val=32 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 
-- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val=32 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val=2 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val=Yes 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val= 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.868 22:05:40 -- accel/accel.sh@21 -- # val= 00:06:45.868 22:05:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.868 22:05:40 -- accel/accel.sh@20 -- # read -r var val 00:06:46.806 22:05:41 -- accel/accel.sh@21 -- # val= 00:06:46.806 22:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # IFS=: 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.806 22:05:41 -- accel/accel.sh@21 -- # val= 00:06:46.806 22:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # IFS=: 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.806 22:05:41 -- accel/accel.sh@21 -- # val= 00:06:46.806 22:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # IFS=: 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.806 22:05:41 -- accel/accel.sh@21 -- # val= 00:06:46.806 22:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # IFS=: 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.806 22:05:41 -- accel/accel.sh@21 -- # val= 00:06:46.806 22:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # IFS=: 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.806 22:05:41 -- accel/accel.sh@21 -- # val= 00:06:46.806 22:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # IFS=: 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.806 22:05:41 -- accel/accel.sh@21 -- # val= 00:06:46.806 22:05:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # IFS=: 00:06:46.806 22:05:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.806 22:05:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.806 22:05:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:46.806 22:05:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.806 00:06:46.806 real 0m2.587s 00:06:46.806 user 0m2.368s 00:06:46.806 sys 0m0.226s 00:06:46.806 22:05:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.806 22:05:41 -- common/autotest_common.sh@10 -- # set +x 
00:06:46.806 ************************************ 00:06:46.806 END TEST accel_decomp_mthread 00:06:46.806 ************************************ 00:06:47.066 22:05:41 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:47.066 22:05:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:47.066 22:05:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.066 22:05:41 -- common/autotest_common.sh@10 -- # set +x 00:06:47.066 ************************************ 00:06:47.066 START TEST accel_deomp_full_mthread 00:06:47.066 ************************************ 00:06:47.066 22:05:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:47.066 22:05:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.066 22:05:41 -- accel/accel.sh@17 -- # local accel_module 00:06:47.066 22:05:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:47.066 22:05:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:47.066 22:05:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.066 22:05:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.066 22:05:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.066 22:05:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.066 22:05:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.066 22:05:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.066 22:05:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.066 22:05:41 -- accel/accel.sh@42 -- # jq -r . 00:06:47.066 [2024-07-24 22:05:41.988875] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:47.066 [2024-07-24 22:05:41.988942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3393103 ] 00:06:47.066 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.066 [2024-07-24 22:05:42.042563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.066 [2024-07-24 22:05:42.078856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.442 22:05:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:48.442 00:06:48.442 SPDK Configuration: 00:06:48.442 Core mask: 0x1 00:06:48.442 00:06:48.442 Accel Perf Configuration: 00:06:48.442 Workload Type: decompress 00:06:48.442 Transfer size: 111250 bytes 00:06:48.442 Vector count 1 00:06:48.442 Module: software 00:06:48.442 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.442 Queue depth: 32 00:06:48.442 Allocate depth: 32 00:06:48.442 # threads/core: 2 00:06:48.442 Run time: 1 seconds 00:06:48.442 Verify: Yes 00:06:48.442 00:06:48.442 Running for 1 seconds... 
00:06:48.442 00:06:48.442 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.442 ------------------------------------------------------------------------------------ 00:06:48.442 0,1 2560/s 105 MiB/s 0 0 00:06:48.442 0,0 2528/s 104 MiB/s 0 0 00:06:48.442 ==================================================================================== 00:06:48.442 Total 5088/s 539 MiB/s 0 0' 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:48.442 22:05:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.442 22:05:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.442 22:05:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:48.442 22:05:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.442 22:05:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.442 22:05:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.442 22:05:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.442 22:05:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.442 22:05:43 -- accel/accel.sh@42 -- # jq -r . 00:06:48.442 [2024-07-24 22:05:43.300175] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:06:48.442 [2024-07-24 22:05:43.300251] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3393335 ] 00:06:48.442 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.442 [2024-07-24 22:05:43.354340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.442 [2024-07-24 22:05:43.390151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val= 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val= 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val= 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val=0x1 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val= 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val= 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val=decompress 00:06:48.442 
22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val= 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val=software 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val=32 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val=32 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val=2 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val=Yes 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val= 00:06:48.442 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.442 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.442 22:05:43 -- accel/accel.sh@21 -- # val= 00:06:48.443 22:05:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.443 22:05:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.443 22:05:43 -- accel/accel.sh@20 -- # read -r var val 00:06:49.822 22:05:44 -- accel/accel.sh@21 -- # val= 00:06:49.822 22:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # IFS=: 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.822 22:05:44 -- accel/accel.sh@21 -- # val= 00:06:49.822 22:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # IFS=: 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.822 22:05:44 -- accel/accel.sh@21 -- # val= 00:06:49.822 22:05:44 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # IFS=: 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.822 22:05:44 -- accel/accel.sh@21 -- # val= 00:06:49.822 22:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # IFS=: 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.822 22:05:44 -- accel/accel.sh@21 -- # val= 00:06:49.822 22:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # IFS=: 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.822 22:05:44 -- accel/accel.sh@21 -- # val= 00:06:49.822 22:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # IFS=: 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.822 22:05:44 -- accel/accel.sh@21 -- # val= 00:06:49.822 22:05:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # IFS=: 00:06:49.822 22:05:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.822 22:05:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.822 22:05:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:49.822 22:05:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.822 00:06:49.822 real 0m2.624s 00:06:49.822 user 0m2.413s 00:06:49.822 sys 0m0.219s 00:06:49.822 22:05:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.822 22:05:44 -- common/autotest_common.sh@10 -- # set +x 00:06:49.822 ************************************ 00:06:49.822 END TEST accel_deomp_full_mthread 00:06:49.822 ************************************ 00:06:49.822 22:05:44 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:49.822 22:05:44 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:49.822 22:05:44 -- accel/accel.sh@129 -- # build_accel_config 00:06:49.822 22:05:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:49.822 22:05:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.822 22:05:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.822 22:05:44 -- common/autotest_common.sh@10 -- # set +x 00:06:49.822 22:05:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.822 22:05:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.822 22:05:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.822 22:05:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.822 22:05:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.822 22:05:44 -- accel/accel.sh@42 -- # jq -r . 00:06:49.822 ************************************ 00:06:49.822 START TEST accel_dif_functional_tests 00:06:49.822 ************************************ 00:06:49.822 22:05:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:49.822 [2024-07-24 22:05:44.661516] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:49.822 [2024-07-24 22:05:44.661561] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3393591 ] 00:06:49.822 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.822 [2024-07-24 22:05:44.713307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.822 [2024-07-24 22:05:44.751864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.822 [2024-07-24 22:05:44.751960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.822 [2024-07-24 22:05:44.751962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.822 00:06:49.822 00:06:49.822 CUnit - A unit testing framework for C - Version 2.1-3 00:06:49.822 http://cunit.sourceforge.net/ 00:06:49.822 00:06:49.822 00:06:49.822 Suite: accel_dif 00:06:49.822 Test: verify: DIF generated, GUARD check ...passed 00:06:49.822 Test: verify: DIF generated, APPTAG check ...passed 00:06:49.822 Test: verify: DIF generated, REFTAG check ...passed 00:06:49.822 Test: verify: DIF not generated, GUARD check ...[2024-07-24 22:05:44.815198] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:49.822 [2024-07-24 22:05:44.815240] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:49.822 passed 00:06:49.822 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 22:05:44.815269] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:49.822 [2024-07-24 22:05:44.815282] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:49.822 passed 00:06:49.822 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 22:05:44.815298] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:49.822 [2024-07-24 22:05:44.815311] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:49.822 passed 00:06:49.822 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:49.822 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 22:05:44.815347] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:49.822 passed 00:06:49.822 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:49.822 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:49.822 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:49.822 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 22:05:44.815444] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:49.822 passed 00:06:49.822 Test: generate copy: DIF generated, GUARD check ...passed 00:06:49.822 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:49.822 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:49.822 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:49.822 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:49.822 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:49.822 Test: generate copy: iovecs-len validate ...[2024-07-24 22:05:44.815598] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:49.822 passed 00:06:49.822 Test: generate copy: buffer alignment validate ...passed 00:06:49.822 00:06:49.822 Run Summary: Type Total Ran Passed Failed Inactive 00:06:49.822 suites 1 1 n/a 0 0 00:06:49.822 tests 20 20 20 0 0 00:06:49.822 asserts 204 204 204 0 n/a 00:06:49.822 00:06:49.822 Elapsed time = 0.000 seconds 00:06:50.122 00:06:50.122 real 0m0.354s 00:06:50.122 user 0m0.553s 00:06:50.122 sys 0m0.142s 00:06:50.122 22:05:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.122 22:05:44 -- common/autotest_common.sh@10 -- # set +x 00:06:50.122 ************************************ 00:06:50.122 END TEST accel_dif_functional_tests 00:06:50.122 ************************************ 00:06:50.122 00:06:50.122 real 0m54.632s 00:06:50.122 user 1m3.213s 00:06:50.122 sys 0m5.884s 00:06:50.122 22:05:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.122 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.122 ************************************ 00:06:50.122 END TEST accel 00:06:50.122 ************************************ 00:06:50.122 22:05:45 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:50.122 22:05:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:50.122 22:05:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.122 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.122 ************************************ 00:06:50.122 START TEST accel_rpc 00:06:50.122 ************************************ 00:06:50.122 22:05:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:50.122 * Looking for test storage... 00:06:50.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:50.122 22:05:45 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:50.122 22:05:45 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3393653 00:06:50.122 22:05:45 -- accel/accel_rpc.sh@15 -- # waitforlisten 3393653 00:06:50.122 22:05:45 -- common/autotest_common.sh@819 -- # '[' -z 3393653 ']' 00:06:50.122 22:05:45 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:50.122 22:05:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.122 22:05:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:50.122 22:05:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.122 22:05:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:50.122 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.122 [2024-07-24 22:05:45.163139] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:50.122 [2024-07-24 22:05:45.163189] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3393653 ] 00:06:50.122 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.122 [2024-07-24 22:05:45.216550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.122 [2024-07-24 22:05:45.256039] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.122 [2024-07-24 22:05:45.256157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.382 22:05:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:50.382 22:05:45 -- common/autotest_common.sh@852 -- # return 0 00:06:50.382 22:05:45 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:50.382 22:05:45 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:50.382 22:05:45 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:50.382 22:05:45 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:50.382 22:05:45 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:50.382 22:05:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:50.382 22:05:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.382 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.382 ************************************ 00:06:50.382 START TEST accel_assign_opcode 00:06:50.382 ************************************ 00:06:50.382 22:05:45 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:50.382 22:05:45 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:50.382 22:05:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:50.382 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.382 [2024-07-24 22:05:45.308535] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:50.382 22:05:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:50.382 22:05:45 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:50.382 22:05:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:50.382 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.382 [2024-07-24 22:05:45.316552] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:50.382 22:05:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:50.383 22:05:45 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:50.383 22:05:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:50.383 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.383 22:05:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:50.383 22:05:45 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:50.383 22:05:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:50.383 22:05:45 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:50.383 22:05:45 -- accel/accel_rpc.sh@42 -- # grep software 00:06:50.383 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.383 22:05:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:50.642 software 00:06:50.642 00:06:50.642 real 0m0.219s 00:06:50.642 user 0m0.042s 00:06:50.642 sys 0m0.008s 00:06:50.642 22:05:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.642 22:05:45 -- common/autotest_common.sh@10 -- # set +x 
00:06:50.642 ************************************ 00:06:50.642 END TEST accel_assign_opcode 00:06:50.642 ************************************ 00:06:50.642 22:05:45 -- accel/accel_rpc.sh@55 -- # killprocess 3393653 00:06:50.642 22:05:45 -- common/autotest_common.sh@926 -- # '[' -z 3393653 ']' 00:06:50.642 22:05:45 -- common/autotest_common.sh@930 -- # kill -0 3393653 00:06:50.642 22:05:45 -- common/autotest_common.sh@931 -- # uname 00:06:50.642 22:05:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:50.642 22:05:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3393653 00:06:50.642 22:05:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:50.642 22:05:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:50.642 22:05:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3393653' 00:06:50.642 killing process with pid 3393653 00:06:50.642 22:05:45 -- common/autotest_common.sh@945 -- # kill 3393653 00:06:50.642 22:05:45 -- common/autotest_common.sh@950 -- # wait 3393653 00:06:50.902 00:06:50.902 real 0m0.846s 00:06:50.902 user 0m0.797s 00:06:50.902 sys 0m0.340s 00:06:50.902 22:05:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.902 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.902 ************************************ 00:06:50.902 END TEST accel_rpc 00:06:50.902 ************************************ 00:06:50.902 22:05:45 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:50.902 22:05:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:50.902 22:05:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.902 22:05:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.902 ************************************ 00:06:50.902 START TEST app_cmdline 00:06:50.902 ************************************ 00:06:50.902 22:05:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:50.902 * Looking for test storage... 00:06:50.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:50.902 22:05:46 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:50.902 22:05:46 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3393952 00:06:50.902 22:05:46 -- app/cmdline.sh@18 -- # waitforlisten 3393952 00:06:50.902 22:05:46 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:50.902 22:05:46 -- common/autotest_common.sh@819 -- # '[' -z 3393952 ']' 00:06:50.902 22:05:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.902 22:05:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:50.902 22:05:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.902 22:05:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:50.902 22:05:46 -- common/autotest_common.sh@10 -- # set +x 00:06:51.162 [2024-07-24 22:05:46.065359] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:06:51.162 [2024-07-24 22:05:46.065407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3393952 ] 00:06:51.162 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.162 [2024-07-24 22:05:46.117961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.162 [2024-07-24 22:05:46.156515] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.162 [2024-07-24 22:05:46.156639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.730 22:05:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:51.730 22:05:46 -- common/autotest_common.sh@852 -- # return 0 00:06:51.730 22:05:46 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:51.989 { 00:06:51.989 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:06:51.989 "fields": { 00:06:51.989 "major": 24, 00:06:51.989 "minor": 1, 00:06:51.989 "patch": 1, 00:06:51.989 "suffix": "-pre", 00:06:51.989 "commit": "dbef7efac" 00:06:51.989 } 00:06:51.989 } 00:06:51.989 22:05:47 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:51.989 22:05:47 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:51.989 22:05:47 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:51.989 22:05:47 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:51.989 22:05:47 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:51.989 22:05:47 -- app/cmdline.sh@26 -- # sort 00:06:51.989 22:05:47 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:51.989 22:05:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.989 22:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:51.989 22:05:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.989 22:05:47 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:51.989 22:05:47 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:51.989 22:05:47 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.989 22:05:47 -- common/autotest_common.sh@640 -- # local es=0 00:06:51.989 22:05:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.989 22:05:47 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.989 22:05:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:51.989 22:05:47 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.989 22:05:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:51.989 22:05:47 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.989 22:05:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:51.989 22:05:47 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.989 22:05:47 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:51.989 22:05:47 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.249 request: 00:06:52.249 { 00:06:52.249 "method": "env_dpdk_get_mem_stats", 00:06:52.249 "req_id": 1 00:06:52.249 } 00:06:52.249 Got JSON-RPC error response 00:06:52.249 response: 00:06:52.249 { 00:06:52.249 "code": -32601, 00:06:52.249 "message": "Method not found" 00:06:52.249 } 00:06:52.249 22:05:47 -- common/autotest_common.sh@643 -- # es=1 00:06:52.249 22:05:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:52.249 22:05:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:52.249 22:05:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:52.249 22:05:47 -- app/cmdline.sh@1 -- # killprocess 3393952 00:06:52.249 22:05:47 -- common/autotest_common.sh@926 -- # '[' -z 3393952 ']' 00:06:52.249 22:05:47 -- common/autotest_common.sh@930 -- # kill -0 3393952 00:06:52.249 22:05:47 -- common/autotest_common.sh@931 -- # uname 00:06:52.250 22:05:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:52.250 22:05:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3393952 00:06:52.250 22:05:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:52.250 22:05:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:52.250 22:05:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3393952' 00:06:52.250 killing process with pid 3393952 00:06:52.250 22:05:47 -- common/autotest_common.sh@945 -- # kill 3393952 00:06:52.250 22:05:47 -- common/autotest_common.sh@950 -- # wait 3393952 00:06:52.509 00:06:52.509 real 0m1.642s 00:06:52.509 user 0m1.988s 00:06:52.509 sys 0m0.394s 00:06:52.509 22:05:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.509 22:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:52.509 ************************************ 00:06:52.509 END TEST app_cmdline 00:06:52.509 ************************************ 00:06:52.509 22:05:47 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:52.509 22:05:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:52.509 22:05:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.509 22:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:52.509 ************************************ 00:06:52.509 START TEST version 00:06:52.509 ************************************ 00:06:52.509 22:05:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:52.769 * Looking for test storage... 
00:06:52.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:52.769 22:05:47 -- app/version.sh@17 -- # get_header_version major 00:06:52.769 22:05:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:52.769 22:05:47 -- app/version.sh@14 -- # cut -f2 00:06:52.769 22:05:47 -- app/version.sh@14 -- # tr -d '"' 00:06:52.769 22:05:47 -- app/version.sh@17 -- # major=24 00:06:52.769 22:05:47 -- app/version.sh@18 -- # get_header_version minor 00:06:52.769 22:05:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:52.769 22:05:47 -- app/version.sh@14 -- # cut -f2 00:06:52.769 22:05:47 -- app/version.sh@14 -- # tr -d '"' 00:06:52.769 22:05:47 -- app/version.sh@18 -- # minor=1 00:06:52.769 22:05:47 -- app/version.sh@19 -- # get_header_version patch 00:06:52.769 22:05:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:52.769 22:05:47 -- app/version.sh@14 -- # cut -f2 00:06:52.769 22:05:47 -- app/version.sh@14 -- # tr -d '"' 00:06:52.769 22:05:47 -- app/version.sh@19 -- # patch=1 00:06:52.769 22:05:47 -- app/version.sh@20 -- # get_header_version suffix 00:06:52.769 22:05:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:52.769 22:05:47 -- app/version.sh@14 -- # cut -f2 00:06:52.769 22:05:47 -- app/version.sh@14 -- # tr -d '"' 00:06:52.769 22:05:47 -- app/version.sh@20 -- # suffix=-pre 00:06:52.769 22:05:47 -- app/version.sh@22 -- # version=24.1 00:06:52.769 22:05:47 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:52.769 22:05:47 -- app/version.sh@25 -- # version=24.1.1 00:06:52.769 22:05:47 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:52.769 22:05:47 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:52.769 22:05:47 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:52.769 22:05:47 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:52.769 22:05:47 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:52.769 00:06:52.769 real 0m0.141s 00:06:52.769 user 0m0.079s 00:06:52.769 sys 0m0.099s 00:06:52.769 22:05:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.769 22:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:52.769 ************************************ 00:06:52.769 END TEST version 00:06:52.769 ************************************ 00:06:52.769 22:05:47 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:52.769 22:05:47 -- spdk/autotest.sh@204 -- # uname -s 00:06:52.769 22:05:47 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:52.769 22:05:47 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:52.769 22:05:47 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:52.769 22:05:47 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:52.769 22:05:47 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:52.769 22:05:47 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:52.769 22:05:47 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:06:52.769 22:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:52.769 22:05:47 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:52.769 22:05:47 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:52.769 22:05:47 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:52.769 22:05:47 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:52.769 22:05:47 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:52.769 22:05:47 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:52.769 22:05:47 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:52.769 22:05:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:52.769 22:05:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.769 22:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:52.769 ************************************ 00:06:52.769 START TEST nvmf_tcp 00:06:52.769 ************************************ 00:06:52.769 22:05:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:53.029 * Looking for test storage... 00:06:53.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:53.029 22:05:47 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:53.029 22:05:47 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:53.029 22:05:47 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.029 22:05:47 -- nvmf/common.sh@7 -- # uname -s 00:06:53.029 22:05:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.029 22:05:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.030 22:05:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.030 22:05:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.030 22:05:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.030 22:05:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.030 22:05:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.030 22:05:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.030 22:05:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.030 22:05:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.030 22:05:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:53.030 22:05:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:53.030 22:05:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.030 22:05:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.030 22:05:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.030 22:05:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.030 22:05:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.030 22:05:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.030 22:05:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.030 22:05:47 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.030 22:05:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.030 22:05:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.030 22:05:47 -- paths/export.sh@5 -- # export PATH 00:06:53.030 22:05:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.030 22:05:47 -- nvmf/common.sh@46 -- # : 0 00:06:53.030 22:05:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:53.030 22:05:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:53.030 22:05:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:53.030 22:05:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.030 22:05:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.030 22:05:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:53.030 22:05:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:53.030 22:05:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:53.030 22:05:47 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:53.030 22:05:47 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:53.030 22:05:47 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:53.030 22:05:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:53.030 22:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:53.030 22:05:47 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:53.030 22:05:47 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:53.030 22:05:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:53.030 22:05:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.030 22:05:47 -- common/autotest_common.sh@10 -- # set +x 00:06:53.030 ************************************ 00:06:53.030 START TEST nvmf_example 00:06:53.030 ************************************ 00:06:53.030 22:05:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:53.030 * Looking for test storage... 
00:06:53.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.030 22:05:48 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.030 22:05:48 -- nvmf/common.sh@7 -- # uname -s 00:06:53.030 22:05:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.030 22:05:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.030 22:05:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.030 22:05:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.030 22:05:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.030 22:05:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.030 22:05:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.030 22:05:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.030 22:05:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.030 22:05:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.030 22:05:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:53.030 22:05:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:53.030 22:05:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.030 22:05:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.030 22:05:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.030 22:05:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.030 22:05:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.030 22:05:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.030 22:05:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.030 22:05:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.030 22:05:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.030 22:05:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.030 22:05:48 -- paths/export.sh@5 -- # export PATH 00:06:53.030 22:05:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.030 22:05:48 -- nvmf/common.sh@46 -- # : 0 00:06:53.030 22:05:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:53.030 22:05:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:53.030 22:05:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:53.030 22:05:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.030 22:05:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.030 22:05:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:53.030 22:05:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:53.030 22:05:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:53.030 22:05:48 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:53.030 22:05:48 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:53.030 22:05:48 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:53.030 22:05:48 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:53.030 22:05:48 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:53.030 22:05:48 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:53.030 22:05:48 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:53.030 22:05:48 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:53.030 22:05:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:53.030 22:05:48 -- common/autotest_common.sh@10 -- # set +x 00:06:53.030 22:05:48 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:53.030 22:05:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:53.030 22:05:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.030 22:05:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:53.030 22:05:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:53.030 22:05:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:53.030 22:05:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.030 22:05:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.030 22:05:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.030 22:05:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:53.030 22:05:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:53.030 22:05:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:53.030 22:05:48 -- 
common/autotest_common.sh@10 -- # set +x 00:06:58.308 22:05:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:58.308 22:05:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:58.308 22:05:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:58.308 22:05:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:58.308 22:05:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:58.308 22:05:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:58.308 22:05:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:58.308 22:05:52 -- nvmf/common.sh@294 -- # net_devs=() 00:06:58.308 22:05:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:58.308 22:05:52 -- nvmf/common.sh@295 -- # e810=() 00:06:58.308 22:05:52 -- nvmf/common.sh@295 -- # local -ga e810 00:06:58.308 22:05:52 -- nvmf/common.sh@296 -- # x722=() 00:06:58.308 22:05:52 -- nvmf/common.sh@296 -- # local -ga x722 00:06:58.308 22:05:52 -- nvmf/common.sh@297 -- # mlx=() 00:06:58.308 22:05:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:58.308 22:05:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.308 22:05:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.308 22:05:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.308 22:05:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.308 22:05:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.308 22:05:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.308 22:05:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.308 22:05:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.308 22:05:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.308 22:05:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.308 22:05:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.308 22:05:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:58.308 22:05:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:58.308 22:05:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:58.309 22:05:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:58.309 22:05:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:58.309 22:05:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:58.309 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:58.309 22:05:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:58.309 22:05:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:58.309 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:58.309 22:05:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
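[editor's note] The entries above and immediately below are gather_supported_nvmf_pci_devs classifying the host's NICs: per-family arrays of vendor:device IDs are built (e810, x722, mlx), the family named by SPDK_TEST_NVMF_NICS (e810 here) is selected, and each matching PCI address is resolved to its kernel netdev. A condensed sketch of that flow, using the pci_bus_cache map the script itself references:

    # Condensed from the discovery trace; pci_bus_cache maps "vendor:device" -> PCI addresses.
    intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x1592"]})
    e810+=(${pci_bus_cache["$intel:0x159b"]})     # the two ports found here, 0000:86:00.0/.1 (ice)
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # one of several Mellanox IDs the script lists
    pci_devs=("${e810[@]}")                       # SPDK_TEST_NVMF_NICS=e810 selects this family
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # netdev(s) behind each port
        pci_net_devs=("${pci_net_devs[@]##*/}")
        net_devs+=("${pci_net_devs[@]}")                    # -> cvl_0_0 and cvl_0_1, as the next lines show
    done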
00:06:58.309 22:05:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:58.309 22:05:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:58.309 22:05:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.309 22:05:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:58.309 22:05:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.309 22:05:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:58.309 Found net devices under 0000:86:00.0: cvl_0_0 00:06:58.309 22:05:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.309 22:05:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:58.309 22:05:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.309 22:05:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:58.309 22:05:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.309 22:05:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:58.309 Found net devices under 0000:86:00.1: cvl_0_1 00:06:58.309 22:05:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.309 22:05:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:58.309 22:05:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:58.309 22:05:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:58.309 22:05:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:58.309 22:05:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.309 22:05:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.309 22:05:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.309 22:05:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:58.309 22:05:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.309 22:05:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.309 22:05:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:58.309 22:05:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.309 22:05:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.309 22:05:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:58.309 22:05:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:58.309 22:05:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.309 22:05:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.309 22:05:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.309 22:05:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.309 22:05:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:58.309 22:05:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.309 22:05:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.309 22:05:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.309 22:05:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:58.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:58.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:06:58.309 00:06:58.309 --- 10.0.0.2 ping statistics --- 00:06:58.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.309 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:06:58.309 22:05:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:58.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:06:58.309 00:06:58.309 --- 10.0.0.1 ping statistics --- 00:06:58.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.309 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:06:58.309 22:05:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.309 22:05:53 -- nvmf/common.sh@410 -- # return 0 00:06:58.309 22:05:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:58.309 22:05:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.309 22:05:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:58.309 22:05:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:58.309 22:05:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.309 22:05:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:58.309 22:05:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:58.309 22:05:53 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:58.309 22:05:53 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:58.309 22:05:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:58.309 22:05:53 -- common/autotest_common.sh@10 -- # set +x 00:06:58.309 22:05:53 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:58.309 22:05:53 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:58.309 22:05:53 -- target/nvmf_example.sh@34 -- # nvmfpid=3397355 00:06:58.309 22:05:53 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:58.309 22:05:53 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:58.309 22:05:53 -- target/nvmf_example.sh@36 -- # waitforlisten 3397355 00:06:58.309 22:05:53 -- common/autotest_common.sh@819 -- # '[' -z 3397355 ']' 00:06:58.309 22:05:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.309 22:05:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:58.309 22:05:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
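[editor's note] At this point the "phy" TCP fixture is fully wired: one E810 port (cvl_0_0) has been moved into a private network namespace and addressed as the target at 10.0.0.2, the other port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, port 4420 is opened, reachability is ping-verified in both directions, nvme-tcp is loaded, and the example nvmf target is started inside the namespace (pid 3397355, listening on /var/tmp/spdk.sock). The RPC provisioning and perf run in the lines that follow build on this. A condensed, order-preserving sketch of the same sequence (paths shortened; rpc_cmd is the harness's JSON-RPC wrapper around scripts/rpc.py):

    # Wire the two NIC ports back-to-back through a network namespace (from the trace above).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp

    # Start the example target in the namespace, then provision it over JSON-RPC
    # (these rpc_cmd calls appear in the next trace lines).
    ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512                        # 64 MiB RAM disk, 512-byte blocks -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Drive the listener from the initiator side for 10 s at queue depth 64, 4 KiB random mixed I/O.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'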
00:06:58.309 22:05:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:58.309 22:05:53 -- common/autotest_common.sh@10 -- # set +x 00:06:58.309 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.257 22:05:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:59.257 22:05:54 -- common/autotest_common.sh@852 -- # return 0 00:06:59.257 22:05:54 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:59.257 22:05:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:59.257 22:05:54 -- common/autotest_common.sh@10 -- # set +x 00:06:59.257 22:05:54 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:59.257 22:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.257 22:05:54 -- common/autotest_common.sh@10 -- # set +x 00:06:59.257 22:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.257 22:05:54 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:59.257 22:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.257 22:05:54 -- common/autotest_common.sh@10 -- # set +x 00:06:59.257 22:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.257 22:05:54 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:59.257 22:05:54 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:59.257 22:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.257 22:05:54 -- common/autotest_common.sh@10 -- # set +x 00:06:59.257 22:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.257 22:05:54 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:59.257 22:05:54 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:59.257 22:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.257 22:05:54 -- common/autotest_common.sh@10 -- # set +x 00:06:59.257 22:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.257 22:05:54 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.257 22:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.257 22:05:54 -- common/autotest_common.sh@10 -- # set +x 00:06:59.257 22:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.257 22:05:54 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:59.257 22:05:54 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:59.257 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.471 Initializing NVMe Controllers 00:07:11.471 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:11.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:11.471 Initialization complete. Launching workers. 
00:07:11.471 ======================================================== 00:07:11.471 Latency(us) 00:07:11.471 Device Information : IOPS MiB/s Average min max 00:07:11.471 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13526.70 52.84 4731.90 702.17 21604.46 00:07:11.471 ======================================================== 00:07:11.471 Total : 13526.70 52.84 4731.90 702.17 21604.46 00:07:11.471 00:07:11.471 22:06:04 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:11.471 22:06:04 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:11.471 22:06:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:11.471 22:06:04 -- nvmf/common.sh@116 -- # sync 00:07:11.471 22:06:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:11.471 22:06:04 -- nvmf/common.sh@119 -- # set +e 00:07:11.471 22:06:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:11.471 22:06:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:11.471 rmmod nvme_tcp 00:07:11.471 rmmod nvme_fabrics 00:07:11.471 rmmod nvme_keyring 00:07:11.471 22:06:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:11.471 22:06:04 -- nvmf/common.sh@123 -- # set -e 00:07:11.471 22:06:04 -- nvmf/common.sh@124 -- # return 0 00:07:11.471 22:06:04 -- nvmf/common.sh@477 -- # '[' -n 3397355 ']' 00:07:11.471 22:06:04 -- nvmf/common.sh@478 -- # killprocess 3397355 00:07:11.471 22:06:04 -- common/autotest_common.sh@926 -- # '[' -z 3397355 ']' 00:07:11.471 22:06:04 -- common/autotest_common.sh@930 -- # kill -0 3397355 00:07:11.471 22:06:04 -- common/autotest_common.sh@931 -- # uname 00:07:11.471 22:06:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:11.471 22:06:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3397355 00:07:11.471 22:06:04 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:11.471 22:06:04 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:11.471 22:06:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3397355' 00:07:11.471 killing process with pid 3397355 00:07:11.471 22:06:04 -- common/autotest_common.sh@945 -- # kill 3397355 00:07:11.471 22:06:04 -- common/autotest_common.sh@950 -- # wait 3397355 00:07:11.471 nvmf threads initialize successfully 00:07:11.471 bdev subsystem init successfully 00:07:11.471 created a nvmf target service 00:07:11.471 create targets's poll groups done 00:07:11.471 all subsystems of target started 00:07:11.471 nvmf target is running 00:07:11.471 all subsystems of target stopped 00:07:11.471 destroy targets's poll groups done 00:07:11.471 destroyed the nvmf target service 00:07:11.471 bdev subsystem finish successfully 00:07:11.471 nvmf threads destroy successfully 00:07:11.471 22:06:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:11.471 22:06:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:11.471 22:06:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:11.471 22:06:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:11.471 22:06:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:11.471 22:06:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.471 22:06:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.471 22:06:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.730 22:06:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:11.730 22:06:06 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:11.730 22:06:06 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:07:11.730 22:06:06 -- common/autotest_common.sh@10 -- # set +x 00:07:11.730 00:07:11.730 real 0m18.839s 00:07:11.730 user 0m45.739s 00:07:11.730 sys 0m5.136s 00:07:11.730 22:06:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.730 22:06:06 -- common/autotest_common.sh@10 -- # set +x 00:07:11.730 ************************************ 00:07:11.730 END TEST nvmf_example 00:07:11.730 ************************************ 00:07:11.730 22:06:06 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:11.730 22:06:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:11.730 22:06:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.730 22:06:06 -- common/autotest_common.sh@10 -- # set +x 00:07:11.730 ************************************ 00:07:11.730 START TEST nvmf_filesystem 00:07:11.731 ************************************ 00:07:11.731 22:06:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:11.992 * Looking for test storage... 00:07:11.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.992 22:06:06 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:11.992 22:06:06 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:11.992 22:06:06 -- common/autotest_common.sh@34 -- # set -e 00:07:11.992 22:06:06 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:11.992 22:06:06 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:11.992 22:06:06 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:11.992 22:06:06 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:11.992 22:06:06 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:11.992 22:06:06 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:11.992 22:06:06 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:11.992 22:06:06 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:11.992 22:06:06 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:11.992 22:06:06 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:11.992 22:06:06 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:11.992 22:06:06 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:11.992 22:06:06 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:11.992 22:06:06 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:11.992 22:06:06 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:11.992 22:06:06 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:11.992 22:06:06 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:11.992 22:06:06 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:11.992 22:06:06 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:11.992 22:06:06 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:11.992 22:06:06 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:11.992 22:06:06 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:11.992 22:06:06 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:11.992 22:06:06 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:11.992 22:06:06 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:11.992 22:06:06 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:11.992 22:06:06 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:11.992 22:06:06 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:11.992 22:06:06 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:11.992 22:06:06 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:11.992 22:06:06 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:11.992 22:06:06 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:11.992 22:06:06 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:11.992 22:06:06 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:11.992 22:06:06 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:11.992 22:06:06 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:11.992 22:06:06 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:11.992 22:06:06 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:11.992 22:06:06 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:11.992 22:06:06 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:11.992 22:06:06 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:11.992 22:06:06 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:11.992 22:06:06 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:11.992 22:06:06 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:11.992 22:06:06 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:11.992 22:06:06 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:11.992 22:06:06 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:11.992 22:06:06 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:11.992 22:06:06 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:11.992 22:06:06 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:11.993 22:06:06 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:11.993 22:06:06 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:11.993 22:06:06 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:11.993 22:06:06 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:11.993 22:06:06 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:11.993 22:06:06 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:11.993 22:06:06 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:11.993 22:06:06 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:11.993 22:06:06 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:11.993 22:06:06 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:11.993 22:06:06 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:11.993 22:06:06 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:11.993 22:06:06 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:11.993 22:06:06 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:11.993 22:06:06 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:11.993 22:06:06 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:11.993 22:06:06 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:11.993 22:06:06 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:11.993 22:06:06 -- 
common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:11.993 22:06:06 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:11.993 22:06:06 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:11.993 22:06:06 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:11.993 22:06:06 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:11.993 22:06:06 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:11.993 22:06:06 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:11.993 22:06:06 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:11.993 22:06:06 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:11.993 22:06:06 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:11.993 22:06:06 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:11.993 22:06:06 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:11.993 22:06:06 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:11.993 22:06:06 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:11.993 22:06:06 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:11.993 22:06:06 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:11.993 22:06:06 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:11.993 22:06:06 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:11.993 22:06:06 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:11.993 22:06:06 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:11.993 22:06:06 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.993 22:06:06 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:11.993 22:06:06 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.993 22:06:06 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:11.993 22:06:06 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:11.993 22:06:06 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:11.993 22:06:06 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:11.993 22:06:06 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:11.993 22:06:06 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:11.993 22:06:06 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:11.993 22:06:06 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:11.993 #define SPDK_CONFIG_H 00:07:11.993 #define SPDK_CONFIG_APPS 1 00:07:11.993 #define SPDK_CONFIG_ARCH native 00:07:11.993 #undef SPDK_CONFIG_ASAN 00:07:11.993 #undef SPDK_CONFIG_AVAHI 00:07:11.993 #undef SPDK_CONFIG_CET 00:07:11.993 #define SPDK_CONFIG_COVERAGE 1 00:07:11.993 #define SPDK_CONFIG_CROSS_PREFIX 00:07:11.993 #undef SPDK_CONFIG_CRYPTO 00:07:11.993 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:11.993 #undef SPDK_CONFIG_CUSTOMOCF 00:07:11.993 #undef SPDK_CONFIG_DAOS 00:07:11.993 #define SPDK_CONFIG_DAOS_DIR 00:07:11.993 #define SPDK_CONFIG_DEBUG 1 00:07:11.993 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:11.993 #define SPDK_CONFIG_DPDK_DIR 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:11.993 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:11.993 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:11.993 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:11.993 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:11.993 #define SPDK_CONFIG_EXAMPLES 1 00:07:11.993 #undef SPDK_CONFIG_FC 00:07:11.993 #define SPDK_CONFIG_FC_PATH 00:07:11.993 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:11.993 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:11.993 #undef SPDK_CONFIG_FUSE 00:07:11.993 #undef SPDK_CONFIG_FUZZER 00:07:11.993 #define SPDK_CONFIG_FUZZER_LIB 00:07:11.993 #undef SPDK_CONFIG_GOLANG 00:07:11.993 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:11.993 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:11.993 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:11.993 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:11.993 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:11.993 #define SPDK_CONFIG_IDXD 1 00:07:11.993 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:11.993 #undef SPDK_CONFIG_IPSEC_MB 00:07:11.993 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:11.993 #define SPDK_CONFIG_ISAL 1 00:07:11.993 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:11.993 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:11.993 #define SPDK_CONFIG_LIBDIR 00:07:11.993 #undef SPDK_CONFIG_LTO 00:07:11.993 #define SPDK_CONFIG_MAX_LCORES 00:07:11.993 #define SPDK_CONFIG_NVME_CUSE 1 00:07:11.993 #undef SPDK_CONFIG_OCF 00:07:11.993 #define SPDK_CONFIG_OCF_PATH 00:07:11.993 #define SPDK_CONFIG_OPENSSL_PATH 00:07:11.993 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:11.993 #undef SPDK_CONFIG_PGO_USE 00:07:11.993 #define SPDK_CONFIG_PREFIX /usr/local 00:07:11.993 #undef SPDK_CONFIG_RAID5F 00:07:11.993 #undef SPDK_CONFIG_RBD 00:07:11.993 #define SPDK_CONFIG_RDMA 1 00:07:11.993 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:11.993 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:11.993 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:11.993 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:11.993 #define SPDK_CONFIG_SHARED 1 00:07:11.993 #undef SPDK_CONFIG_SMA 00:07:11.993 #define SPDK_CONFIG_TESTS 1 00:07:11.993 #undef SPDK_CONFIG_TSAN 00:07:11.993 #define SPDK_CONFIG_UBLK 1 00:07:11.993 #define SPDK_CONFIG_UBSAN 1 00:07:11.993 #undef SPDK_CONFIG_UNIT_TESTS 00:07:11.993 #undef SPDK_CONFIG_URING 00:07:11.993 #define SPDK_CONFIG_URING_PATH 00:07:11.993 #undef SPDK_CONFIG_URING_ZNS 00:07:11.993 #undef SPDK_CONFIG_USDT 00:07:11.993 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:11.993 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:11.993 #define SPDK_CONFIG_VFIO_USER 1 00:07:11.993 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:11.993 #define SPDK_CONFIG_VHOST 1 00:07:11.993 #define SPDK_CONFIG_VIRTIO 1 00:07:11.993 #undef SPDK_CONFIG_VTUNE 00:07:11.993 #define SPDK_CONFIG_VTUNE_DIR 00:07:11.993 #define SPDK_CONFIG_WERROR 1 00:07:11.993 #define SPDK_CONFIG_WPDK_DIR 00:07:11.993 #undef SPDK_CONFIG_XNVME 00:07:11.993 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:11.993 22:06:06 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:11.993 22:06:06 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.993 22:06:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.993 22:06:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.993 
22:06:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.993 22:06:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.993 22:06:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.993 22:06:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.993 22:06:06 -- paths/export.sh@5 -- # export PATH 00:07:11.993 22:06:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.993 22:06:06 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:11.993 22:06:06 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:11.993 22:06:06 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:11.993 22:06:06 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:11.993 22:06:06 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:11.993 22:06:06 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:11.993 22:06:06 -- pm/common@16 -- # TEST_TAG=N/A 00:07:11.993 22:06:06 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:11.993 22:06:06 -- common/autotest_common.sh@52 -- # : 1 00:07:11.993 22:06:06 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:11.993 22:06:06 -- common/autotest_common.sh@56 -- # : 0 
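[editor's note] The long run of paired ": <value>" / "export SPDK_TEST_*" entries that follows is autotest_common.sh fixing a value for each test-selection flag and exporting it. xtrace only prints the already-expanded form, so the exact spelling below is inferred, and the fallback operands shown are illustrative only; the trailing comments quote the values visible in this run:

    # Inferred sketch of the "default, then export" idiom behind the pairs in the trace.
    : "${SPDK_TEST_NVMF:=0}"               # keep the job's value if it is already set
    export SPDK_TEST_NVMF                  # ": 1" in this run, so NVMF testing is enabled
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"   # ": tcp" in this run
    export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=e810}"       # ": e810" in this run selects the E810 ports
    export SPDK_TEST_NVMF_NICS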
00:07:11.993 22:06:06 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:11.993 22:06:06 -- common/autotest_common.sh@58 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:11.993 22:06:06 -- common/autotest_common.sh@60 -- # : 1 00:07:11.993 22:06:06 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:11.993 22:06:06 -- common/autotest_common.sh@62 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:11.993 22:06:06 -- common/autotest_common.sh@64 -- # : 00:07:11.993 22:06:06 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:11.993 22:06:06 -- common/autotest_common.sh@66 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:11.993 22:06:06 -- common/autotest_common.sh@68 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:11.993 22:06:06 -- common/autotest_common.sh@70 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:11.993 22:06:06 -- common/autotest_common.sh@72 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:11.993 22:06:06 -- common/autotest_common.sh@74 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:11.993 22:06:06 -- common/autotest_common.sh@76 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:11.993 22:06:06 -- common/autotest_common.sh@78 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:11.993 22:06:06 -- common/autotest_common.sh@80 -- # : 1 00:07:11.993 22:06:06 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:11.993 22:06:06 -- common/autotest_common.sh@82 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:11.993 22:06:06 -- common/autotest_common.sh@84 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:11.993 22:06:06 -- common/autotest_common.sh@86 -- # : 1 00:07:11.993 22:06:06 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:11.993 22:06:06 -- common/autotest_common.sh@88 -- # : 1 00:07:11.993 22:06:06 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:11.993 22:06:06 -- common/autotest_common.sh@90 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:11.993 22:06:06 -- common/autotest_common.sh@92 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:11.993 22:06:06 -- common/autotest_common.sh@94 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:11.993 22:06:06 -- common/autotest_common.sh@96 -- # : tcp 00:07:11.993 22:06:06 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:11.993 22:06:06 -- common/autotest_common.sh@98 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:11.993 22:06:06 -- common/autotest_common.sh@100 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:11.993 22:06:06 -- common/autotest_common.sh@102 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:11.993 22:06:06 -- 
common/autotest_common.sh@104 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:11.993 22:06:06 -- common/autotest_common.sh@106 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:11.993 22:06:06 -- common/autotest_common.sh@108 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:11.993 22:06:06 -- common/autotest_common.sh@110 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:11.993 22:06:06 -- common/autotest_common.sh@112 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:11.993 22:06:06 -- common/autotest_common.sh@114 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:11.993 22:06:06 -- common/autotest_common.sh@116 -- # : 1 00:07:11.993 22:06:06 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:11.993 22:06:06 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:11.993 22:06:06 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:11.993 22:06:06 -- common/autotest_common.sh@120 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:11.993 22:06:06 -- common/autotest_common.sh@122 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:11.993 22:06:06 -- common/autotest_common.sh@124 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:11.993 22:06:06 -- common/autotest_common.sh@126 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:11.993 22:06:06 -- common/autotest_common.sh@128 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:11.993 22:06:06 -- common/autotest_common.sh@130 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:11.993 22:06:06 -- common/autotest_common.sh@132 -- # : v22.11.4 00:07:11.993 22:06:06 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:11.993 22:06:06 -- common/autotest_common.sh@134 -- # : true 00:07:11.993 22:06:06 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:11.993 22:06:06 -- common/autotest_common.sh@136 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:11.993 22:06:06 -- common/autotest_common.sh@138 -- # : 0 00:07:11.993 22:06:06 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:11.994 22:06:06 -- common/autotest_common.sh@140 -- # : 0 00:07:11.994 22:06:06 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:11.994 22:06:06 -- common/autotest_common.sh@142 -- # : 0 00:07:11.994 22:06:06 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:11.994 22:06:06 -- common/autotest_common.sh@144 -- # : 0 00:07:11.994 22:06:06 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:11.994 22:06:06 -- common/autotest_common.sh@146 -- # : 0 00:07:11.994 22:06:06 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:11.994 22:06:06 -- common/autotest_common.sh@148 -- # : e810 00:07:11.994 22:06:06 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:11.994 22:06:06 -- common/autotest_common.sh@150 -- # : 0 00:07:11.994 22:06:06 -- 
common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:11.994 22:06:06 -- common/autotest_common.sh@152 -- # : 0 00:07:11.994 22:06:06 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:11.994 22:06:06 -- common/autotest_common.sh@154 -- # : 0 00:07:11.994 22:06:06 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:11.994 22:06:06 -- common/autotest_common.sh@156 -- # : 0 00:07:11.994 22:06:06 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:11.994 22:06:06 -- common/autotest_common.sh@158 -- # : 0 00:07:11.994 22:06:06 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:11.994 22:06:06 -- common/autotest_common.sh@160 -- # : 0 00:07:11.994 22:06:06 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:11.994 22:06:06 -- common/autotest_common.sh@163 -- # : 00:07:11.994 22:06:06 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:11.994 22:06:06 -- common/autotest_common.sh@165 -- # : 0 00:07:11.994 22:06:06 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:11.994 22:06:06 -- common/autotest_common.sh@167 -- # : 0 00:07:11.994 22:06:06 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:11.994 22:06:06 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:11.994 22:06:06 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:11.994 22:06:06 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:11.994 22:06:06 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:11.994 22:06:06 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.994 22:06:06 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.994 22:06:06 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.994 22:06:06 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:11.994 22:06:06 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:11.994 22:06:06 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:11.994 22:06:06 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:11.994 22:06:06 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:11.994 22:06:06 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:11.994 22:06:06 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:11.994 22:06:06 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:11.994 22:06:06 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:11.994 22:06:06 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:11.994 22:06:06 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:11.994 22:06:06 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:11.994 22:06:06 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:11.994 22:06:06 -- common/autotest_common.sh@196 -- # cat 00:07:11.994 22:06:06 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:11.994 22:06:06 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:11.994 22:06:06 -- 
common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:11.994 22:06:06 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:11.994 22:06:06 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:11.994 22:06:06 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:11.994 22:06:06 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:11.994 22:06:06 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.994 22:06:06 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:11.994 22:06:06 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.994 22:06:06 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:11.994 22:06:06 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:11.994 22:06:06 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:11.994 22:06:06 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:11.994 22:06:06 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:11.994 22:06:06 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:11.994 22:06:06 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:11.994 22:06:06 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:11.994 22:06:06 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:11.994 22:06:06 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:11.994 22:06:06 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:11.994 22:06:06 -- common/autotest_common.sh@249 -- # valgrind= 00:07:11.994 22:06:06 -- common/autotest_common.sh@255 -- # uname -s 00:07:11.994 22:06:06 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:11.994 22:06:06 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:11.994 22:06:06 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:11.994 22:06:06 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:11.994 22:06:06 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:11.994 22:06:06 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:11.994 22:06:06 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:11.994 22:06:06 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j96 00:07:11.994 22:06:06 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:11.994 22:06:06 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:11.994 22:06:06 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:11.994 22:06:06 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:11.994 22:06:06 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:11.994 22:06:06 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:11.994 22:06:06 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:11.994 22:06:06 -- common/autotest_common.sh@297 -- 
# TEST_TRANSPORT=tcp 00:07:11.994 22:06:06 -- common/autotest_common.sh@309 -- # [[ -z 3399940 ]] 00:07:11.994 22:06:06 -- common/autotest_common.sh@309 -- # kill -0 3399940 00:07:11.994 22:06:06 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:11.994 22:06:06 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:11.994 22:06:06 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:11.994 22:06:06 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:11.994 22:06:06 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:11.994 22:06:06 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:11.994 22:06:06 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:11.994 22:06:06 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:11.994 22:06:07 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.T5Wo5Y 00:07:11.994 22:06:07 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:11.994 22:06:07 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:11.994 22:06:07 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:11.994 22:06:07 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.T5Wo5Y/tests/target /tmp/spdk.T5Wo5Y 00:07:11.994 22:06:07 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:11.994 22:06:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:11.994 22:06:07 -- common/autotest_common.sh@318 -- # df -T 00:07:11.994 22:06:07 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:11.994 22:06:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:11.994 22:06:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=950202368 00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:07:11.994 22:06:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=4334227456 00:07:11.994 22:06:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=183933603840 00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=195974283264 00:07:11.994 22:06:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=12040679424 00:07:11.994 22:06:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 
00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=97933623296 00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=97987141632 00:07:11.994 22:06:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:07:11.994 22:06:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=39185473536 00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=39194857472 00:07:11.994 22:06:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=9383936 00:07:11.994 22:06:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=97984507904 00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=97987141632 00:07:11.994 22:06:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=2633728 00:07:11.994 22:06:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:11.994 22:06:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=19597422592 00:07:11.994 22:06:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=19597426688 00:07:11.994 22:06:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:11.994 22:06:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:11.994 22:06:07 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:11.994 * Looking for test storage... 
00:07:11.994 22:06:07 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:11.994 22:06:07 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:11.994 22:06:07 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.994 22:06:07 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:11.994 22:06:07 -- common/autotest_common.sh@363 -- # mount=/ 00:07:11.994 22:06:07 -- common/autotest_common.sh@365 -- # target_space=183933603840 00:07:11.994 22:06:07 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:11.994 22:06:07 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:11.994 22:06:07 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:11.994 22:06:07 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:11.994 22:06:07 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:11.994 22:06:07 -- common/autotest_common.sh@372 -- # new_size=14255271936 00:07:11.994 22:06:07 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:11.994 22:06:07 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.994 22:06:07 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.994 22:06:07 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:11.994 22:06:07 -- common/autotest_common.sh@380 -- # return 0 00:07:11.994 22:06:07 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:11.994 22:06:07 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:11.994 22:06:07 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:11.994 22:06:07 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:11.994 22:06:07 -- common/autotest_common.sh@1672 -- # true 00:07:11.994 22:06:07 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:11.994 22:06:07 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:11.994 22:06:07 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:11.994 22:06:07 -- common/autotest_common.sh@27 -- # exec 00:07:11.994 22:06:07 -- common/autotest_common.sh@29 -- # exec 00:07:11.994 22:06:07 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:11.994 22:06:07 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:11.994 22:06:07 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:11.994 22:06:07 -- common/autotest_common.sh@18 -- # set -x 00:07:11.994 22:06:07 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.994 22:06:07 -- nvmf/common.sh@7 -- # uname -s 00:07:11.994 22:06:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.994 22:06:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.994 22:06:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.994 22:06:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.994 22:06:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.994 22:06:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.994 22:06:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.994 22:06:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.994 22:06:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.994 22:06:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.994 22:06:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:11.994 22:06:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:11.994 22:06:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.994 22:06:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.994 22:06:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.994 22:06:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:11.994 22:06:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.994 22:06:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.994 22:06:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.994 22:06:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.994 22:06:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.994 22:06:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.994 22:06:07 -- paths/export.sh@5 -- # export PATH 00:07:11.995 22:06:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.995 22:06:07 -- nvmf/common.sh@46 -- # : 0 00:07:11.995 22:06:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:11.995 22:06:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:11.995 22:06:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:11.995 22:06:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.995 22:06:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.995 22:06:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:11.995 22:06:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:11.995 22:06:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:11.995 22:06:07 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:11.995 22:06:07 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:11.995 22:06:07 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:11.995 22:06:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:11.995 22:06:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.995 22:06:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:11.995 22:06:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:11.995 22:06:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:11.995 22:06:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.995 22:06:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.995 22:06:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.995 22:06:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:11.995 22:06:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:11.995 22:06:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:11.995 22:06:07 -- common/autotest_common.sh@10 -- # set +x 00:07:17.268 22:06:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:17.268 22:06:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:17.268 22:06:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:17.268 22:06:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:17.268 22:06:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:17.268 22:06:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:17.268 22:06:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:17.268 22:06:12 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:17.268 22:06:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:17.268 22:06:12 -- nvmf/common.sh@295 -- # e810=() 00:07:17.268 22:06:12 -- nvmf/common.sh@295 -- # local -ga e810 00:07:17.268 22:06:12 -- nvmf/common.sh@296 -- # x722=() 00:07:17.268 22:06:12 -- nvmf/common.sh@296 -- # local -ga x722 00:07:17.268 22:06:12 -- nvmf/common.sh@297 -- # mlx=() 00:07:17.268 22:06:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:17.268 22:06:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.268 22:06:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.268 22:06:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.268 22:06:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.268 22:06:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.268 22:06:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.268 22:06:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.268 22:06:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.268 22:06:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.268 22:06:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.268 22:06:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.268 22:06:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:17.268 22:06:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:17.268 22:06:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:17.268 22:06:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:17.268 22:06:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:17.268 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:17.268 22:06:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:17.268 22:06:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:17.268 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:17.268 22:06:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:17.268 22:06:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:17.268 22:06:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.268 22:06:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:17.268 22:06:12 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.268 22:06:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:17.268 Found net devices under 0000:86:00.0: cvl_0_0 00:07:17.268 22:06:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.268 22:06:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:17.268 22:06:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.268 22:06:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:17.268 22:06:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.268 22:06:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:17.268 Found net devices under 0000:86:00.1: cvl_0_1 00:07:17.268 22:06:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.268 22:06:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:17.268 22:06:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:17.268 22:06:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:17.268 22:06:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:17.268 22:06:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.268 22:06:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.268 22:06:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.268 22:06:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:17.268 22:06:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.268 22:06:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.268 22:06:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:17.268 22:06:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.268 22:06:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.268 22:06:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:17.268 22:06:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:17.268 22:06:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.268 22:06:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.268 22:06:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.268 22:06:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.268 22:06:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:17.268 22:06:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.268 22:06:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.268 22:06:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.528 22:06:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:17.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:17.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:07:17.528 00:07:17.528 --- 10.0.0.2 ping statistics --- 00:07:17.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.528 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:07:17.528 22:06:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:17.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:07:17.528 00:07:17.528 --- 10.0.0.1 ping statistics --- 00:07:17.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.528 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:07:17.528 22:06:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.528 22:06:12 -- nvmf/common.sh@410 -- # return 0 00:07:17.528 22:06:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:17.528 22:06:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.528 22:06:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:17.528 22:06:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:17.528 22:06:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.528 22:06:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:17.528 22:06:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:17.528 22:06:12 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:17.528 22:06:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:17.528 22:06:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.528 22:06:12 -- common/autotest_common.sh@10 -- # set +x 00:07:17.528 ************************************ 00:07:17.528 START TEST nvmf_filesystem_no_in_capsule 00:07:17.528 ************************************ 00:07:17.528 22:06:12 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:17.528 22:06:12 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:17.528 22:06:12 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:17.528 22:06:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:17.528 22:06:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:17.528 22:06:12 -- common/autotest_common.sh@10 -- # set +x 00:07:17.528 22:06:12 -- nvmf/common.sh@469 -- # nvmfpid=3403371 00:07:17.528 22:06:12 -- nvmf/common.sh@470 -- # waitforlisten 3403371 00:07:17.528 22:06:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:17.528 22:06:12 -- common/autotest_common.sh@819 -- # '[' -z 3403371 ']' 00:07:17.528 22:06:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.528 22:06:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:17.528 22:06:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.528 22:06:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:17.528 22:06:12 -- common/autotest_common.sh@10 -- # set +x 00:07:17.528 [2024-07-24 22:06:12.498858] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:17.528 [2024-07-24 22:06:12.498901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.528 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.529 [2024-07-24 22:06:12.556636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.529 [2024-07-24 22:06:12.596938] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:17.529 [2024-07-24 22:06:12.597057] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.529 [2024-07-24 22:06:12.597065] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.529 [2024-07-24 22:06:12.597073] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.529 [2024-07-24 22:06:12.597120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.529 [2024-07-24 22:06:12.597219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.529 [2024-07-24 22:06:12.597306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.529 [2024-07-24 22:06:12.597307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.465 22:06:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:18.465 22:06:13 -- common/autotest_common.sh@852 -- # return 0 00:07:18.465 22:06:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:18.465 22:06:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:18.465 22:06:13 -- common/autotest_common.sh@10 -- # set +x 00:07:18.465 22:06:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.465 22:06:13 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:18.465 22:06:13 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:18.465 22:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.465 22:06:13 -- common/autotest_common.sh@10 -- # set +x 00:07:18.465 [2024-07-24 22:06:13.350517] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.465 22:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.465 22:06:13 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:18.465 22:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.465 22:06:13 -- common/autotest_common.sh@10 -- # set +x 00:07:18.465 Malloc1 00:07:18.465 22:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.465 22:06:13 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:18.465 22:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.465 22:06:13 -- common/autotest_common.sh@10 -- # set +x 00:07:18.465 22:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.465 22:06:13 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.465 22:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.465 22:06:13 -- common/autotest_common.sh@10 -- # set +x 00:07:18.465 22:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.465 22:06:13 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:07:18.465 22:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.465 22:06:13 -- common/autotest_common.sh@10 -- # set +x 00:07:18.465 [2024-07-24 22:06:13.495863] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.465 22:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.465 22:06:13 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:18.465 22:06:13 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:18.465 22:06:13 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:18.465 22:06:13 -- common/autotest_common.sh@1359 -- # local bs 00:07:18.465 22:06:13 -- common/autotest_common.sh@1360 -- # local nb 00:07:18.465 22:06:13 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:18.465 22:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.465 22:06:13 -- common/autotest_common.sh@10 -- # set +x 00:07:18.465 22:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.465 22:06:13 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:18.465 { 00:07:18.465 "name": "Malloc1", 00:07:18.465 "aliases": [ 00:07:18.465 "c5fc8080-24c0-4371-9a7c-5baf77c2e12a" 00:07:18.465 ], 00:07:18.465 "product_name": "Malloc disk", 00:07:18.465 "block_size": 512, 00:07:18.465 "num_blocks": 1048576, 00:07:18.465 "uuid": "c5fc8080-24c0-4371-9a7c-5baf77c2e12a", 00:07:18.465 "assigned_rate_limits": { 00:07:18.465 "rw_ios_per_sec": 0, 00:07:18.465 "rw_mbytes_per_sec": 0, 00:07:18.465 "r_mbytes_per_sec": 0, 00:07:18.465 "w_mbytes_per_sec": 0 00:07:18.465 }, 00:07:18.465 "claimed": true, 00:07:18.466 "claim_type": "exclusive_write", 00:07:18.466 "zoned": false, 00:07:18.466 "supported_io_types": { 00:07:18.466 "read": true, 00:07:18.466 "write": true, 00:07:18.466 "unmap": true, 00:07:18.466 "write_zeroes": true, 00:07:18.466 "flush": true, 00:07:18.466 "reset": true, 00:07:18.466 "compare": false, 00:07:18.466 "compare_and_write": false, 00:07:18.466 "abort": true, 00:07:18.466 "nvme_admin": false, 00:07:18.466 "nvme_io": false 00:07:18.466 }, 00:07:18.466 "memory_domains": [ 00:07:18.466 { 00:07:18.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.466 "dma_device_type": 2 00:07:18.466 } 00:07:18.466 ], 00:07:18.466 "driver_specific": {} 00:07:18.466 } 00:07:18.466 ]' 00:07:18.466 22:06:13 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:18.466 22:06:13 -- common/autotest_common.sh@1362 -- # bs=512 00:07:18.466 22:06:13 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:18.724 22:06:13 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:18.724 22:06:13 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:18.724 22:06:13 -- common/autotest_common.sh@1367 -- # echo 512 00:07:18.724 22:06:13 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:18.724 22:06:13 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:19.738 22:06:14 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:19.738 22:06:14 -- common/autotest_common.sh@1177 -- # local i=0 00:07:19.738 22:06:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:19.738 22:06:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:19.738 22:06:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:21.643 22:06:16 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:21.643 22:06:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:21.643 22:06:16 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:21.643 22:06:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:21.643 22:06:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:21.643 22:06:16 -- common/autotest_common.sh@1187 -- # return 0 00:07:21.643 22:06:16 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:21.643 22:06:16 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:21.643 22:06:16 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:21.643 22:06:16 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:21.643 22:06:16 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:21.643 22:06:16 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:21.643 22:06:16 -- setup/common.sh@80 -- # echo 536870912 00:07:21.643 22:06:16 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:21.643 22:06:16 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:21.643 22:06:16 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:21.643 22:06:16 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:22.210 22:06:17 -- target/filesystem.sh@69 -- # partprobe 00:07:22.469 22:06:17 -- target/filesystem.sh@70 -- # sleep 1 00:07:23.405 22:06:18 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:23.405 22:06:18 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:23.405 22:06:18 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:23.405 22:06:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.405 22:06:18 -- common/autotest_common.sh@10 -- # set +x 00:07:23.663 ************************************ 00:07:23.663 START TEST filesystem_ext4 00:07:23.663 ************************************ 00:07:23.663 22:06:18 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:23.663 22:06:18 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:23.663 22:06:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:23.663 22:06:18 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:23.663 22:06:18 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:23.663 22:06:18 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:23.663 22:06:18 -- common/autotest_common.sh@904 -- # local i=0 00:07:23.663 22:06:18 -- common/autotest_common.sh@905 -- # local force 00:07:23.663 22:06:18 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:23.663 22:06:18 -- common/autotest_common.sh@908 -- # force=-F 00:07:23.663 22:06:18 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:23.663 mke2fs 1.46.5 (30-Dec-2021) 00:07:23.663 Discarding device blocks: 0/522240 done 00:07:23.663 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:23.663 Filesystem UUID: e984c80b-0e15-4ddc-875a-6ae647d590db 00:07:23.663 Superblock backups stored on blocks: 00:07:23.663 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:23.663 00:07:23.663 Allocating group tables: 0/64 done 00:07:23.663 Writing inode tables: 0/64 done 00:07:26.951 Creating journal (8192 blocks): done 00:07:27.469 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:07:27.469 00:07:27.469 22:06:22 -- 
common/autotest_common.sh@921 -- # return 0 00:07:27.469 22:06:22 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.407 22:06:23 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.407 22:06:23 -- target/filesystem.sh@25 -- # sync 00:07:28.407 22:06:23 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.407 22:06:23 -- target/filesystem.sh@27 -- # sync 00:07:28.407 22:06:23 -- target/filesystem.sh@29 -- # i=0 00:07:28.407 22:06:23 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.407 22:06:23 -- target/filesystem.sh@37 -- # kill -0 3403371 00:07:28.407 22:06:23 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.407 22:06:23 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.407 22:06:23 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.407 22:06:23 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.407 00:07:28.407 real 0m4.785s 00:07:28.407 user 0m0.027s 00:07:28.407 sys 0m0.045s 00:07:28.407 22:06:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.407 22:06:23 -- common/autotest_common.sh@10 -- # set +x 00:07:28.407 ************************************ 00:07:28.407 END TEST filesystem_ext4 00:07:28.407 ************************************ 00:07:28.407 22:06:23 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:28.407 22:06:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:28.407 22:06:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.407 22:06:23 -- common/autotest_common.sh@10 -- # set +x 00:07:28.407 ************************************ 00:07:28.407 START TEST filesystem_btrfs 00:07:28.407 ************************************ 00:07:28.407 22:06:23 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:28.407 22:06:23 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:28.407 22:06:23 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.407 22:06:23 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:28.407 22:06:23 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:28.407 22:06:23 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:28.407 22:06:23 -- common/autotest_common.sh@904 -- # local i=0 00:07:28.407 22:06:23 -- common/autotest_common.sh@905 -- # local force 00:07:28.407 22:06:23 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:28.407 22:06:23 -- common/autotest_common.sh@910 -- # force=-f 00:07:28.407 22:06:23 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:28.665 btrfs-progs v6.6.2 00:07:28.665 See https://btrfs.readthedocs.io for more information. 00:07:28.665 00:07:28.665 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:28.665 NOTE: several default settings have changed in version 5.15, please make sure 00:07:28.665 this does not affect your deployments: 00:07:28.665 - DUP for metadata (-m dup) 00:07:28.665 - enabled no-holes (-O no-holes) 00:07:28.665 - enabled free-space-tree (-R free-space-tree) 00:07:28.665 00:07:28.665 Label: (null) 00:07:28.665 UUID: 921a344f-1541-4e17-a3a1-156c3ed5dfde 00:07:28.665 Node size: 16384 00:07:28.665 Sector size: 4096 00:07:28.665 Filesystem size: 510.00MiB 00:07:28.665 Block group profiles: 00:07:28.665 Data: single 8.00MiB 00:07:28.665 Metadata: DUP 32.00MiB 00:07:28.665 System: DUP 8.00MiB 00:07:28.665 SSD detected: yes 00:07:28.665 Zoned device: no 00:07:28.665 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:28.665 Runtime features: free-space-tree 00:07:28.665 Checksum: crc32c 00:07:28.665 Number of devices: 1 00:07:28.665 Devices: 00:07:28.665 ID SIZE PATH 00:07:28.665 1 510.00MiB /dev/nvme0n1p1 00:07:28.665 00:07:28.665 22:06:23 -- common/autotest_common.sh@921 -- # return 0 00:07:28.665 22:06:23 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.600 22:06:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:29.600 22:06:24 -- target/filesystem.sh@25 -- # sync 00:07:29.600 22:06:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:29.600 22:06:24 -- target/filesystem.sh@27 -- # sync 00:07:29.600 22:06:24 -- target/filesystem.sh@29 -- # i=0 00:07:29.600 22:06:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:29.600 22:06:24 -- target/filesystem.sh@37 -- # kill -0 3403371 00:07:29.600 22:06:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:29.600 22:06:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:29.859 22:06:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:29.859 22:06:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:29.859 00:07:29.859 real 0m1.380s 00:07:29.859 user 0m0.021s 00:07:29.859 sys 0m0.063s 00:07:29.859 22:06:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.859 22:06:24 -- common/autotest_common.sh@10 -- # set +x 00:07:29.859 ************************************ 00:07:29.859 END TEST filesystem_btrfs 00:07:29.859 ************************************ 00:07:29.859 22:06:24 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:29.859 22:06:24 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:29.859 22:06:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.859 22:06:24 -- common/autotest_common.sh@10 -- # set +x 00:07:29.859 ************************************ 00:07:29.859 START TEST filesystem_xfs 00:07:29.859 ************************************ 00:07:29.859 22:06:24 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:29.859 22:06:24 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:29.859 22:06:24 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.859 22:06:24 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:29.859 22:06:24 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:29.859 22:06:24 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:29.859 22:06:24 -- common/autotest_common.sh@904 -- # local i=0 00:07:29.859 22:06:24 -- common/autotest_common.sh@905 -- # local force 00:07:29.859 22:06:24 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:29.859 22:06:24 -- common/autotest_common.sh@910 -- # force=-f 00:07:29.859 22:06:24 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:29.859 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:29.859 = sectsz=512 attr=2, projid32bit=1 00:07:29.859 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:29.859 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:29.859 data = bsize=4096 blocks=130560, imaxpct=25 00:07:29.859 = sunit=0 swidth=0 blks 00:07:29.859 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:29.859 log =internal log bsize=4096 blocks=16384, version=2 00:07:29.859 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:29.859 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:30.795 Discarding blocks...Done. 00:07:30.795 22:06:25 -- common/autotest_common.sh@921 -- # return 0 00:07:30.795 22:06:25 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.326 22:06:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.326 22:06:28 -- target/filesystem.sh@25 -- # sync 00:07:33.326 22:06:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.326 22:06:28 -- target/filesystem.sh@27 -- # sync 00:07:33.326 22:06:28 -- target/filesystem.sh@29 -- # i=0 00:07:33.326 22:06:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.326 22:06:28 -- target/filesystem.sh@37 -- # kill -0 3403371 00:07:33.326 22:06:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.326 22:06:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.326 22:06:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:33.326 22:06:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:33.326 00:07:33.326 real 0m3.390s 00:07:33.326 user 0m0.019s 00:07:33.326 sys 0m0.054s 00:07:33.326 22:06:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.326 22:06:28 -- common/autotest_common.sh@10 -- # set +x 00:07:33.327 ************************************ 00:07:33.327 END TEST filesystem_xfs 00:07:33.327 ************************************ 00:07:33.327 22:06:28 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:33.585 22:06:28 -- target/filesystem.sh@93 -- # sync 00:07:33.585 22:06:28 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:33.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.585 22:06:28 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:33.585 22:06:28 -- common/autotest_common.sh@1198 -- # local i=0 00:07:33.585 22:06:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:33.585 22:06:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:33.585 22:06:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:33.585 22:06:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:33.585 22:06:28 -- common/autotest_common.sh@1210 -- # return 0 00:07:33.585 22:06:28 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:33.585 22:06:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:33.585 22:06:28 -- common/autotest_common.sh@10 -- # set +x 00:07:33.585 22:06:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:33.585 22:06:28 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:33.585 22:06:28 -- target/filesystem.sh@101 -- # killprocess 3403371 00:07:33.585 22:06:28 -- common/autotest_common.sh@926 -- # '[' -z 3403371 ']' 00:07:33.585 22:06:28 -- common/autotest_common.sh@930 -- # kill -0 3403371 00:07:33.585 22:06:28 -- 
common/autotest_common.sh@931 -- # uname 00:07:33.585 22:06:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:33.585 22:06:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3403371 00:07:33.585 22:06:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:33.585 22:06:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:33.585 22:06:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3403371' 00:07:33.585 killing process with pid 3403371 00:07:33.585 22:06:28 -- common/autotest_common.sh@945 -- # kill 3403371 00:07:33.585 22:06:28 -- common/autotest_common.sh@950 -- # wait 3403371 00:07:34.153 22:06:29 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:34.153 00:07:34.153 real 0m16.562s 00:07:34.153 user 1m5.324s 00:07:34.153 sys 0m1.122s 00:07:34.153 22:06:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.153 22:06:29 -- common/autotest_common.sh@10 -- # set +x 00:07:34.153 ************************************ 00:07:34.153 END TEST nvmf_filesystem_no_in_capsule 00:07:34.153 ************************************ 00:07:34.153 22:06:29 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:34.154 22:06:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:34.154 22:06:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.154 22:06:29 -- common/autotest_common.sh@10 -- # set +x 00:07:34.154 ************************************ 00:07:34.154 START TEST nvmf_filesystem_in_capsule 00:07:34.154 ************************************ 00:07:34.154 22:06:29 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:07:34.154 22:06:29 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:34.154 22:06:29 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:34.154 22:06:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:34.154 22:06:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:34.154 22:06:29 -- common/autotest_common.sh@10 -- # set +x 00:07:34.154 22:06:29 -- nvmf/common.sh@469 -- # nvmfpid=3406401 00:07:34.154 22:06:29 -- nvmf/common.sh@470 -- # waitforlisten 3406401 00:07:34.154 22:06:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.154 22:06:29 -- common/autotest_common.sh@819 -- # '[' -z 3406401 ']' 00:07:34.154 22:06:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.154 22:06:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:34.154 22:06:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.154 22:06:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:34.154 22:06:29 -- common/autotest_common.sh@10 -- # set +x 00:07:34.154 [2024-07-24 22:06:29.109526] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:07:34.154 [2024-07-24 22:06:29.109577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.154 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.154 [2024-07-24 22:06:29.166899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.154 [2024-07-24 22:06:29.202395] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:34.154 [2024-07-24 22:06:29.202507] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.154 [2024-07-24 22:06:29.202515] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.154 [2024-07-24 22:06:29.202522] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.154 [2024-07-24 22:06:29.202615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.154 [2024-07-24 22:06:29.202715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.154 [2024-07-24 22:06:29.202802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:34.154 [2024-07-24 22:06:29.202802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.089 22:06:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:35.089 22:06:29 -- common/autotest_common.sh@852 -- # return 0 00:07:35.089 22:06:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:35.089 22:06:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:35.090 22:06:29 -- common/autotest_common.sh@10 -- # set +x 00:07:35.090 22:06:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.090 22:06:29 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:35.090 22:06:29 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:35.090 22:06:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.090 22:06:29 -- common/autotest_common.sh@10 -- # set +x 00:07:35.090 [2024-07-24 22:06:29.941452] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.090 22:06:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.090 22:06:29 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:35.090 22:06:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.090 22:06:29 -- common/autotest_common.sh@10 -- # set +x 00:07:35.090 Malloc1 00:07:35.090 22:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.090 22:06:30 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:35.090 22:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.090 22:06:30 -- common/autotest_common.sh@10 -- # set +x 00:07:35.090 22:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.090 22:06:30 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:35.090 22:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.090 22:06:30 -- common/autotest_common.sh@10 -- # set +x 00:07:35.090 22:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.090 22:06:30 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:07:35.090 22:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.090 22:06:30 -- common/autotest_common.sh@10 -- # set +x 00:07:35.090 [2024-07-24 22:06:30.094724] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.090 22:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.090 22:06:30 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:35.090 22:06:30 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:35.090 22:06:30 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:35.090 22:06:30 -- common/autotest_common.sh@1359 -- # local bs 00:07:35.090 22:06:30 -- common/autotest_common.sh@1360 -- # local nb 00:07:35.090 22:06:30 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:35.090 22:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.090 22:06:30 -- common/autotest_common.sh@10 -- # set +x 00:07:35.090 22:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.090 22:06:30 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:35.090 { 00:07:35.090 "name": "Malloc1", 00:07:35.090 "aliases": [ 00:07:35.090 "1a5163a7-48ce-4718-b16c-dd1adb808dc0" 00:07:35.090 ], 00:07:35.090 "product_name": "Malloc disk", 00:07:35.090 "block_size": 512, 00:07:35.090 "num_blocks": 1048576, 00:07:35.090 "uuid": "1a5163a7-48ce-4718-b16c-dd1adb808dc0", 00:07:35.090 "assigned_rate_limits": { 00:07:35.090 "rw_ios_per_sec": 0, 00:07:35.090 "rw_mbytes_per_sec": 0, 00:07:35.090 "r_mbytes_per_sec": 0, 00:07:35.090 "w_mbytes_per_sec": 0 00:07:35.090 }, 00:07:35.090 "claimed": true, 00:07:35.090 "claim_type": "exclusive_write", 00:07:35.090 "zoned": false, 00:07:35.090 "supported_io_types": { 00:07:35.090 "read": true, 00:07:35.090 "write": true, 00:07:35.090 "unmap": true, 00:07:35.090 "write_zeroes": true, 00:07:35.090 "flush": true, 00:07:35.090 "reset": true, 00:07:35.090 "compare": false, 00:07:35.090 "compare_and_write": false, 00:07:35.090 "abort": true, 00:07:35.090 "nvme_admin": false, 00:07:35.090 "nvme_io": false 00:07:35.090 }, 00:07:35.090 "memory_domains": [ 00:07:35.090 { 00:07:35.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.090 "dma_device_type": 2 00:07:35.090 } 00:07:35.090 ], 00:07:35.090 "driver_specific": {} 00:07:35.090 } 00:07:35.090 ]' 00:07:35.090 22:06:30 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:35.090 22:06:30 -- common/autotest_common.sh@1362 -- # bs=512 00:07:35.090 22:06:30 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:35.090 22:06:30 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:35.090 22:06:30 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:35.090 22:06:30 -- common/autotest_common.sh@1367 -- # echo 512 00:07:35.090 22:06:30 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:35.090 22:06:30 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:36.467 22:06:31 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:36.467 22:06:31 -- common/autotest_common.sh@1177 -- # local i=0 00:07:36.467 22:06:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:36.467 22:06:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:36.467 22:06:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:38.423 22:06:33 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:38.423 22:06:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:38.423 22:06:33 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:38.423 22:06:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:38.423 22:06:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:38.423 22:06:33 -- common/autotest_common.sh@1187 -- # return 0 00:07:38.423 22:06:33 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:38.423 22:06:33 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:38.423 22:06:33 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:38.423 22:06:33 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:38.423 22:06:33 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:38.423 22:06:33 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:38.423 22:06:33 -- setup/common.sh@80 -- # echo 536870912 00:07:38.423 22:06:33 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:38.423 22:06:33 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:38.423 22:06:33 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:38.423 22:06:33 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:38.992 22:06:33 -- target/filesystem.sh@69 -- # partprobe 00:07:39.250 22:06:34 -- target/filesystem.sh@70 -- # sleep 1 00:07:40.623 22:06:35 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:40.623 22:06:35 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:40.623 22:06:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:40.623 22:06:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.623 22:06:35 -- common/autotest_common.sh@10 -- # set +x 00:07:40.623 ************************************ 00:07:40.623 START TEST filesystem_in_capsule_ext4 00:07:40.623 ************************************ 00:07:40.623 22:06:35 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:40.623 22:06:35 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:40.623 22:06:35 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:40.623 22:06:35 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:40.623 22:06:35 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:40.623 22:06:35 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:40.623 22:06:35 -- common/autotest_common.sh@904 -- # local i=0 00:07:40.623 22:06:35 -- common/autotest_common.sh@905 -- # local force 00:07:40.623 22:06:35 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:40.623 22:06:35 -- common/autotest_common.sh@908 -- # force=-F 00:07:40.623 22:06:35 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:40.623 mke2fs 1.46.5 (30-Dec-2021) 00:07:40.623 Discarding device blocks: 0/522240 done 00:07:40.623 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:40.623 Filesystem UUID: fdab33a0-4096-427f-a8c8-a42abdd65446 00:07:40.623 Superblock backups stored on blocks: 00:07:40.623 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:40.623 00:07:40.623 Allocating group tables: 0/64 done 00:07:40.623 Writing inode tables: 0/64 done 00:07:40.623 Creating journal (8192 blocks): done 00:07:40.623 Writing superblocks and filesystem accounting information: 0/64 done 00:07:40.623 00:07:40.623 
22:06:35 -- common/autotest_common.sh@921 -- # return 0 00:07:40.623 22:06:35 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.559 22:06:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.559 22:06:36 -- target/filesystem.sh@25 -- # sync 00:07:41.559 22:06:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.559 22:06:36 -- target/filesystem.sh@27 -- # sync 00:07:41.559 22:06:36 -- target/filesystem.sh@29 -- # i=0 00:07:41.559 22:06:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.559 22:06:36 -- target/filesystem.sh@37 -- # kill -0 3406401 00:07:41.559 22:06:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.559 22:06:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.559 22:06:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.559 22:06:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.559 00:07:41.559 real 0m1.167s 00:07:41.559 user 0m0.024s 00:07:41.559 sys 0m0.042s 00:07:41.559 22:06:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.559 22:06:36 -- common/autotest_common.sh@10 -- # set +x 00:07:41.559 ************************************ 00:07:41.559 END TEST filesystem_in_capsule_ext4 00:07:41.559 ************************************ 00:07:41.559 22:06:36 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:41.559 22:06:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:41.559 22:06:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.559 22:06:36 -- common/autotest_common.sh@10 -- # set +x 00:07:41.559 ************************************ 00:07:41.559 START TEST filesystem_in_capsule_btrfs 00:07:41.559 ************************************ 00:07:41.559 22:06:36 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:41.559 22:06:36 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:41.559 22:06:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:41.559 22:06:36 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:41.559 22:06:36 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:41.559 22:06:36 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:41.559 22:06:36 -- common/autotest_common.sh@904 -- # local i=0 00:07:41.559 22:06:36 -- common/autotest_common.sh@905 -- # local force 00:07:41.559 22:06:36 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:41.559 22:06:36 -- common/autotest_common.sh@910 -- # force=-f 00:07:41.559 22:06:36 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:41.817 btrfs-progs v6.6.2 00:07:41.817 See https://btrfs.readthedocs.io for more information. 00:07:41.817 00:07:41.817 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:41.817 NOTE: several default settings have changed in version 5.15, please make sure 00:07:41.817 this does not affect your deployments: 00:07:41.817 - DUP for metadata (-m dup) 00:07:41.817 - enabled no-holes (-O no-holes) 00:07:41.817 - enabled free-space-tree (-R free-space-tree) 00:07:41.817 00:07:41.817 Label: (null) 00:07:41.817 UUID: 94dc0406-697d-417d-bff4-3e328cb603f6 00:07:41.817 Node size: 16384 00:07:41.817 Sector size: 4096 00:07:41.817 Filesystem size: 510.00MiB 00:07:41.817 Block group profiles: 00:07:41.817 Data: single 8.00MiB 00:07:41.817 Metadata: DUP 32.00MiB 00:07:41.817 System: DUP 8.00MiB 00:07:41.817 SSD detected: yes 00:07:41.817 Zoned device: no 00:07:41.817 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:41.817 Runtime features: free-space-tree 00:07:41.817 Checksum: crc32c 00:07:41.817 Number of devices: 1 00:07:41.817 Devices: 00:07:41.817 ID SIZE PATH 00:07:41.817 1 510.00MiB /dev/nvme0n1p1 00:07:41.817 00:07:41.817 22:06:36 -- common/autotest_common.sh@921 -- # return 0 00:07:41.817 22:06:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.383 22:06:37 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.383 22:06:37 -- target/filesystem.sh@25 -- # sync 00:07:42.383 22:06:37 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.383 22:06:37 -- target/filesystem.sh@27 -- # sync 00:07:42.383 22:06:37 -- target/filesystem.sh@29 -- # i=0 00:07:42.383 22:06:37 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.383 22:06:37 -- target/filesystem.sh@37 -- # kill -0 3406401 00:07:42.383 22:06:37 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.383 22:06:37 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.640 22:06:37 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.640 22:06:37 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.640 00:07:42.640 real 0m0.981s 00:07:42.640 user 0m0.026s 00:07:42.640 sys 0m0.055s 00:07:42.640 22:06:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.640 22:06:37 -- common/autotest_common.sh@10 -- # set +x 00:07:42.640 ************************************ 00:07:42.640 END TEST filesystem_in_capsule_btrfs 00:07:42.640 ************************************ 00:07:42.640 22:06:37 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:42.640 22:06:37 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:42.640 22:06:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.640 22:06:37 -- common/autotest_common.sh@10 -- # set +x 00:07:42.640 ************************************ 00:07:42.640 START TEST filesystem_in_capsule_xfs 00:07:42.640 ************************************ 00:07:42.640 22:06:37 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:42.640 22:06:37 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:42.640 22:06:37 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.640 22:06:37 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:42.640 22:06:37 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:42.640 22:06:37 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:42.640 22:06:37 -- common/autotest_common.sh@904 -- # local i=0 00:07:42.640 22:06:37 -- common/autotest_common.sh@905 -- # local force 00:07:42.640 22:06:37 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:42.640 22:06:37 -- common/autotest_common.sh@910 -- # force=-f 
00:07:42.640 22:06:37 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:42.640 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:42.640 = sectsz=512 attr=2, projid32bit=1 00:07:42.640 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:42.640 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:42.640 data = bsize=4096 blocks=130560, imaxpct=25 00:07:42.640 = sunit=0 swidth=0 blks 00:07:42.640 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:42.640 log =internal log bsize=4096 blocks=16384, version=2 00:07:42.640 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:42.640 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:43.573 Discarding blocks...Done. 00:07:43.573 22:06:38 -- common/autotest_common.sh@921 -- # return 0 00:07:43.573 22:06:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:45.476 22:06:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:45.476 22:06:40 -- target/filesystem.sh@25 -- # sync 00:07:45.476 22:06:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:45.476 22:06:40 -- target/filesystem.sh@27 -- # sync 00:07:45.476 22:06:40 -- target/filesystem.sh@29 -- # i=0 00:07:45.476 22:06:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:45.476 22:06:40 -- target/filesystem.sh@37 -- # kill -0 3406401 00:07:45.476 22:06:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:45.476 22:06:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:45.476 22:06:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:45.476 22:06:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:45.476 00:07:45.476 real 0m2.860s 00:07:45.476 user 0m0.023s 00:07:45.476 sys 0m0.049s 00:07:45.476 22:06:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.476 22:06:40 -- common/autotest_common.sh@10 -- # set +x 00:07:45.476 ************************************ 00:07:45.476 END TEST filesystem_in_capsule_xfs 00:07:45.476 ************************************ 00:07:45.476 22:06:40 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:45.476 22:06:40 -- target/filesystem.sh@93 -- # sync 00:07:45.476 22:06:40 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:45.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.476 22:06:40 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:45.476 22:06:40 -- common/autotest_common.sh@1198 -- # local i=0 00:07:45.476 22:06:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:45.476 22:06:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.476 22:06:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:45.476 22:06:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:45.735 22:06:40 -- common/autotest_common.sh@1210 -- # return 0 00:07:45.735 22:06:40 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:45.735 22:06:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:45.735 22:06:40 -- common/autotest_common.sh@10 -- # set +x 00:07:45.735 22:06:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:45.735 22:06:40 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:45.735 22:06:40 -- target/filesystem.sh@101 -- # killprocess 3406401 00:07:45.735 22:06:40 -- common/autotest_common.sh@926 -- # '[' -z 3406401 ']' 00:07:45.735 22:06:40 -- common/autotest_common.sh@930 -- # kill -0 3406401 
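All three in-capsule cases above (ext4, btrfs, xfs) run the same verification body from target/filesystem.sh once the filesystem exists; condensed from the @23-@43 trace lines, with the PID and mount point taken from this run:

    mount /dev/nvme0n1p1 /mnt/device          # @23: mount the freshly made filesystem
    touch /mnt/device/aaa                     # @24: small write
    sync                                      # @25
    rm /mnt/device/aaa                        # @26: and delete it again
    sync                                      # @27
    umount /mnt/device                        # @30
    kill -0 3406401                           # @37: nvmf target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # @40: namespace still exposed
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # @43: partition still exposed

The real/user/sys lines printed after each case are the shell's time output for that whole case.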
00:07:45.735 22:06:40 -- common/autotest_common.sh@931 -- # uname 00:07:45.735 22:06:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:45.735 22:06:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3406401 00:07:45.735 22:06:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:45.735 22:06:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:45.735 22:06:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3406401' 00:07:45.735 killing process with pid 3406401 00:07:45.735 22:06:40 -- common/autotest_common.sh@945 -- # kill 3406401 00:07:45.735 22:06:40 -- common/autotest_common.sh@950 -- # wait 3406401 00:07:45.994 22:06:41 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:45.994 00:07:45.994 real 0m11.958s 00:07:45.994 user 0m47.042s 00:07:45.994 sys 0m1.017s 00:07:45.994 22:06:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.994 22:06:41 -- common/autotest_common.sh@10 -- # set +x 00:07:45.994 ************************************ 00:07:45.994 END TEST nvmf_filesystem_in_capsule 00:07:45.994 ************************************ 00:07:45.994 22:06:41 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:45.994 22:06:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:45.994 22:06:41 -- nvmf/common.sh@116 -- # sync 00:07:45.994 22:06:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:45.994 22:06:41 -- nvmf/common.sh@119 -- # set +e 00:07:45.994 22:06:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:45.994 22:06:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:45.994 rmmod nvme_tcp 00:07:45.994 rmmod nvme_fabrics 00:07:45.994 rmmod nvme_keyring 00:07:45.994 22:06:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:45.994 22:06:41 -- nvmf/common.sh@123 -- # set -e 00:07:45.994 22:06:41 -- nvmf/common.sh@124 -- # return 0 00:07:45.994 22:06:41 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:45.994 22:06:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:45.994 22:06:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:45.994 22:06:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:45.994 22:06:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:45.994 22:06:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:45.994 22:06:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.994 22:06:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.994 22:06:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.532 22:06:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:48.532 00:07:48.532 real 0m36.326s 00:07:48.532 user 1m53.922s 00:07:48.532 sys 0m6.376s 00:07:48.532 22:06:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.532 22:06:43 -- common/autotest_common.sh@10 -- # set +x 00:07:48.532 ************************************ 00:07:48.532 END TEST nvmf_filesystem 00:07:48.532 ************************************ 00:07:48.532 22:06:43 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:48.532 22:06:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:48.532 22:06:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.532 22:06:43 -- common/autotest_common.sh@10 -- # set +x 00:07:48.532 ************************************ 00:07:48.532 START TEST nvmf_discovery 00:07:48.532 ************************************ 00:07:48.532 
22:06:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:48.532 * Looking for test storage... 00:07:48.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.532 22:06:43 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.532 22:06:43 -- nvmf/common.sh@7 -- # uname -s 00:07:48.533 22:06:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.533 22:06:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.533 22:06:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.533 22:06:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.533 22:06:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.533 22:06:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.533 22:06:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.533 22:06:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.533 22:06:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.533 22:06:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.533 22:06:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:48.533 22:06:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:48.533 22:06:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.533 22:06:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.533 22:06:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.533 22:06:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.533 22:06:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.533 22:06:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.533 22:06:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.533 22:06:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.533 22:06:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.533 22:06:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.533 22:06:43 -- paths/export.sh@5 -- # export PATH 00:07:48.533 22:06:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.533 22:06:43 -- nvmf/common.sh@46 -- # : 0 00:07:48.533 22:06:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:48.533 22:06:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:48.533 22:06:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:48.533 22:06:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.533 22:06:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.533 22:06:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:48.533 22:06:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:48.533 22:06:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:48.533 22:06:43 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:48.533 22:06:43 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:48.533 22:06:43 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:48.533 22:06:43 -- target/discovery.sh@15 -- # hash nvme 00:07:48.533 22:06:43 -- target/discovery.sh@20 -- # nvmftestinit 00:07:48.533 22:06:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:48.533 22:06:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.533 22:06:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:48.533 22:06:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:48.533 22:06:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:48.533 22:06:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.533 22:06:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.533 22:06:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.533 22:06:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:48.533 22:06:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:48.533 22:06:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:48.533 22:06:43 -- common/autotest_common.sh@10 -- # set +x 00:07:53.805 22:06:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:53.805 22:06:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:53.805 22:06:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:53.805 22:06:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:53.805 22:06:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:53.805 22:06:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:53.805 22:06:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:53.805 22:06:48 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:53.805 22:06:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:53.805 22:06:48 -- nvmf/common.sh@295 -- # e810=() 00:07:53.805 22:06:48 -- nvmf/common.sh@295 -- # local -ga e810 00:07:53.805 22:06:48 -- nvmf/common.sh@296 -- # x722=() 00:07:53.805 22:06:48 -- nvmf/common.sh@296 -- # local -ga x722 00:07:53.805 22:06:48 -- nvmf/common.sh@297 -- # mlx=() 00:07:53.805 22:06:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:53.805 22:06:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.805 22:06:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.805 22:06:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.805 22:06:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.805 22:06:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.805 22:06:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.805 22:06:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.805 22:06:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.805 22:06:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.805 22:06:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.805 22:06:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.805 22:06:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:53.805 22:06:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:53.805 22:06:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:53.805 22:06:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:53.805 22:06:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:53.805 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:53.805 22:06:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:53.805 22:06:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:53.805 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:53.805 22:06:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:53.805 22:06:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:53.805 22:06:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.805 22:06:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:53.805 22:06:48 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.805 22:06:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:53.805 Found net devices under 0000:86:00.0: cvl_0_0 00:07:53.805 22:06:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.805 22:06:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:53.805 22:06:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.805 22:06:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:53.805 22:06:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.805 22:06:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:53.805 Found net devices under 0000:86:00.1: cvl_0_1 00:07:53.805 22:06:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.805 22:06:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:53.805 22:06:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:53.805 22:06:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:53.805 22:06:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.805 22:06:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.805 22:06:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.805 22:06:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:53.805 22:06:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.805 22:06:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.805 22:06:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:53.805 22:06:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.805 22:06:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.805 22:06:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:53.805 22:06:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:53.805 22:06:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.805 22:06:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.805 22:06:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.805 22:06:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.805 22:06:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:53.805 22:06:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.805 22:06:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.805 22:06:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.805 22:06:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:53.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:07:53.805 00:07:53.805 --- 10.0.0.2 ping statistics --- 00:07:53.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.805 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:07:53.805 22:06:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:07:53.805 00:07:53.805 --- 10.0.0.1 ping statistics --- 00:07:53.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.805 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:53.805 22:06:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.805 22:06:48 -- nvmf/common.sh@410 -- # return 0 00:07:53.805 22:06:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:53.805 22:06:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.805 22:06:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:53.805 22:06:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.805 22:06:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:53.805 22:06:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:53.805 22:06:48 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:53.805 22:06:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:53.805 22:06:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:53.805 22:06:48 -- common/autotest_common.sh@10 -- # set +x 00:07:53.805 22:06:48 -- nvmf/common.sh@469 -- # nvmfpid=3412035 00:07:53.806 22:06:48 -- nvmf/common.sh@470 -- # waitforlisten 3412035 00:07:53.806 22:06:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:53.806 22:06:48 -- common/autotest_common.sh@819 -- # '[' -z 3412035 ']' 00:07:53.806 22:06:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.806 22:06:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:53.806 22:06:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.806 22:06:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:53.806 22:06:48 -- common/autotest_common.sh@10 -- # set +x 00:07:53.806 [2024-07-24 22:06:48.924918] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:07:53.806 [2024-07-24 22:06:48.924959] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.064 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.064 [2024-07-24 22:06:48.984567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.064 [2024-07-24 22:06:49.023518] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:54.064 [2024-07-24 22:06:49.023640] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.064 [2024-07-24 22:06:49.023649] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.064 [2024-07-24 22:06:49.023657] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
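Those ping checks close out nvmf_tcp_init: the target-side port (cvl_0_0) lives in a network namespace and the initiator talks to it over TCP from the host-side port (cvl_0_1). Condensed from the @228-@266 trace lines above, with the interface names and addresses used in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # host -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host

nvmf_tgt itself is then started inside the namespace via ip netns exec, which is why nvmfappstart prefixes the binary with the NVMF_TARGET_NS_CMD shown above.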
00:07:54.064 [2024-07-24 22:06:49.023699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.064 [2024-07-24 22:06:49.023798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.064 [2024-07-24 22:06:49.023884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.064 [2024-07-24 22:06:49.023885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.631 22:06:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:54.631 22:06:49 -- common/autotest_common.sh@852 -- # return 0 00:07:54.631 22:06:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:54.631 22:06:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:54.631 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.631 22:06:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.631 22:06:49 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:54.631 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.631 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 [2024-07-24 22:06:49.772540] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@26 -- # seq 1 4 00:07:54.891 22:06:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:54.891 22:06:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 Null1 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 [2024-07-24 22:06:49.818162] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:54.891 22:06:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 Null2 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:54.891 22:06:49 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:54.891 22:06:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 Null3 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:54.891 22:06:49 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 Null4 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:54.891 
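At this point the discovery test has stood up four NVMe subsystems entirely through rpc_cmd, one null bdev each; the per-subsystem sequence is identical for Null1/cnode1 through Null4/cnode4, and the next trace lines add a listener on the discovery subsystem plus a referral to port 4430. One iteration, condensed from the trace with the sizes and address used in this run (NULL_BDEV_SIZE=102400, NULL_BLOCK_SIZE=512):

    rpc_cmd bdev_null_create Null1 102400 512                        # backing null bdev
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
            -a -s SPDK00000000000001                                 # -a: allow any host, -s: serial number
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1   # expose the bdev as namespace 1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420                               # listen on the namespaced target IP

The nvme discover output a little further down then reports exactly these four subsystems plus the current discovery subsystem and the 4430 referral, six records in total.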
22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:54.891 22:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.891 22:06:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 22:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.891 22:06:49 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:55.151 00:07:55.151 Discovery Log Number of Records 6, Generation counter 6 00:07:55.151 =====Discovery Log Entry 0====== 00:07:55.151 trtype: tcp 00:07:55.151 adrfam: ipv4 00:07:55.151 subtype: current discovery subsystem 00:07:55.151 treq: not required 00:07:55.151 portid: 0 00:07:55.151 trsvcid: 4420 00:07:55.151 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:55.151 traddr: 10.0.0.2 00:07:55.151 eflags: explicit discovery connections, duplicate discovery information 00:07:55.151 sectype: none 00:07:55.151 =====Discovery Log Entry 1====== 00:07:55.151 trtype: tcp 00:07:55.151 adrfam: ipv4 00:07:55.151 subtype: nvme subsystem 00:07:55.151 treq: not required 00:07:55.151 portid: 0 00:07:55.151 trsvcid: 4420 00:07:55.151 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:55.151 traddr: 10.0.0.2 00:07:55.151 eflags: none 00:07:55.151 sectype: none 00:07:55.151 =====Discovery Log Entry 2====== 00:07:55.151 trtype: tcp 00:07:55.151 adrfam: ipv4 00:07:55.151 subtype: nvme subsystem 00:07:55.151 treq: not required 00:07:55.151 portid: 0 00:07:55.151 trsvcid: 4420 00:07:55.151 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:55.151 traddr: 10.0.0.2 00:07:55.151 eflags: none 00:07:55.151 sectype: none 00:07:55.151 =====Discovery Log Entry 3====== 00:07:55.151 trtype: tcp 00:07:55.151 adrfam: ipv4 00:07:55.151 subtype: nvme subsystem 00:07:55.151 treq: not required 00:07:55.151 portid: 0 00:07:55.151 trsvcid: 4420 00:07:55.151 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:55.151 traddr: 10.0.0.2 00:07:55.151 eflags: none 00:07:55.151 sectype: none 00:07:55.151 =====Discovery Log Entry 4====== 00:07:55.151 trtype: tcp 00:07:55.151 adrfam: ipv4 00:07:55.151 subtype: nvme subsystem 00:07:55.151 treq: not required 00:07:55.151 portid: 0 00:07:55.151 trsvcid: 4420 00:07:55.151 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:55.151 traddr: 10.0.0.2 00:07:55.151 eflags: none 00:07:55.151 sectype: none 00:07:55.151 =====Discovery Log Entry 5====== 00:07:55.151 trtype: tcp 00:07:55.151 adrfam: ipv4 00:07:55.151 subtype: discovery subsystem referral 00:07:55.151 treq: not required 00:07:55.151 portid: 0 00:07:55.151 trsvcid: 4430 00:07:55.151 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:55.151 traddr: 10.0.0.2 00:07:55.151 eflags: none 00:07:55.151 sectype: none 00:07:55.151 22:06:50 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:55.151 Perform nvmf subsystem discovery via RPC 00:07:55.151 22:06:50 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:55.151 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.151 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.151 [2024-07-24 22:06:50.091067] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:55.151 [ 00:07:55.151 { 00:07:55.151 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:55.151 "subtype": "Discovery", 00:07:55.151 "listen_addresses": [ 00:07:55.151 { 00:07:55.151 "transport": "TCP", 00:07:55.151 "trtype": "TCP", 00:07:55.151 "adrfam": "IPv4", 00:07:55.151 "traddr": "10.0.0.2", 00:07:55.151 "trsvcid": "4420" 00:07:55.151 } 00:07:55.151 ], 00:07:55.151 "allow_any_host": true, 00:07:55.152 "hosts": [] 00:07:55.152 }, 00:07:55.152 { 00:07:55.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:55.152 "subtype": "NVMe", 00:07:55.152 "listen_addresses": [ 00:07:55.152 { 00:07:55.152 "transport": "TCP", 00:07:55.152 "trtype": "TCP", 00:07:55.152 "adrfam": "IPv4", 00:07:55.152 "traddr": "10.0.0.2", 00:07:55.152 "trsvcid": "4420" 00:07:55.152 } 00:07:55.152 ], 00:07:55.152 "allow_any_host": true, 00:07:55.152 "hosts": [], 00:07:55.152 "serial_number": "SPDK00000000000001", 00:07:55.152 "model_number": "SPDK bdev Controller", 00:07:55.152 "max_namespaces": 32, 00:07:55.152 "min_cntlid": 1, 00:07:55.152 "max_cntlid": 65519, 00:07:55.152 "namespaces": [ 00:07:55.152 { 00:07:55.152 "nsid": 1, 00:07:55.152 "bdev_name": "Null1", 00:07:55.152 "name": "Null1", 00:07:55.152 "nguid": "F693AFBAE6FF40FC938408C1908A1193", 00:07:55.152 "uuid": "f693afba-e6ff-40fc-9384-08c1908a1193" 00:07:55.152 } 00:07:55.152 ] 00:07:55.152 }, 00:07:55.152 { 00:07:55.152 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:55.152 "subtype": "NVMe", 00:07:55.152 "listen_addresses": [ 00:07:55.152 { 00:07:55.152 "transport": "TCP", 00:07:55.152 "trtype": "TCP", 00:07:55.152 "adrfam": "IPv4", 00:07:55.152 "traddr": "10.0.0.2", 00:07:55.152 "trsvcid": "4420" 00:07:55.152 } 00:07:55.152 ], 00:07:55.152 "allow_any_host": true, 00:07:55.152 "hosts": [], 00:07:55.152 "serial_number": "SPDK00000000000002", 00:07:55.152 "model_number": "SPDK bdev Controller", 00:07:55.152 "max_namespaces": 32, 00:07:55.152 "min_cntlid": 1, 00:07:55.152 "max_cntlid": 65519, 00:07:55.152 "namespaces": [ 00:07:55.152 { 00:07:55.152 "nsid": 1, 00:07:55.152 "bdev_name": "Null2", 00:07:55.152 "name": "Null2", 00:07:55.152 "nguid": "33EF98A88BBD4E85B9A319E242620085", 00:07:55.152 "uuid": "33ef98a8-8bbd-4e85-b9a3-19e242620085" 00:07:55.152 } 00:07:55.152 ] 00:07:55.152 }, 00:07:55.152 { 00:07:55.152 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:55.152 "subtype": "NVMe", 00:07:55.152 "listen_addresses": [ 00:07:55.152 { 00:07:55.152 "transport": "TCP", 00:07:55.152 "trtype": "TCP", 00:07:55.152 "adrfam": "IPv4", 00:07:55.152 "traddr": "10.0.0.2", 00:07:55.152 "trsvcid": "4420" 00:07:55.152 } 00:07:55.152 ], 00:07:55.152 "allow_any_host": true, 00:07:55.152 "hosts": [], 00:07:55.152 "serial_number": "SPDK00000000000003", 00:07:55.152 "model_number": "SPDK bdev Controller", 00:07:55.152 "max_namespaces": 32, 00:07:55.152 "min_cntlid": 1, 00:07:55.152 "max_cntlid": 65519, 00:07:55.152 "namespaces": [ 00:07:55.152 { 00:07:55.152 "nsid": 1, 00:07:55.152 "bdev_name": "Null3", 00:07:55.152 "name": "Null3", 00:07:55.152 "nguid": "DF64A4650AA549C9BFED9B76EA7AB1ED", 00:07:55.152 "uuid": "df64a465-0aa5-49c9-bfed-9b76ea7ab1ed" 00:07:55.152 } 00:07:55.152 ] 
00:07:55.152 }, 00:07:55.152 { 00:07:55.152 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:55.152 "subtype": "NVMe", 00:07:55.152 "listen_addresses": [ 00:07:55.152 { 00:07:55.152 "transport": "TCP", 00:07:55.152 "trtype": "TCP", 00:07:55.152 "adrfam": "IPv4", 00:07:55.152 "traddr": "10.0.0.2", 00:07:55.152 "trsvcid": "4420" 00:07:55.152 } 00:07:55.152 ], 00:07:55.152 "allow_any_host": true, 00:07:55.152 "hosts": [], 00:07:55.152 "serial_number": "SPDK00000000000004", 00:07:55.152 "model_number": "SPDK bdev Controller", 00:07:55.152 "max_namespaces": 32, 00:07:55.152 "min_cntlid": 1, 00:07:55.152 "max_cntlid": 65519, 00:07:55.152 "namespaces": [ 00:07:55.152 { 00:07:55.152 "nsid": 1, 00:07:55.152 "bdev_name": "Null4", 00:07:55.152 "name": "Null4", 00:07:55.152 "nguid": "3703618605FD489D813CBFD8E6D7D26A", 00:07:55.152 "uuid": "37036186-05fd-489d-813c-bfd8e6d7d26a" 00:07:55.152 } 00:07:55.152 ] 00:07:55.152 } 00:07:55.152 ] 00:07:55.152 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.152 22:06:50 -- target/discovery.sh@42 -- # seq 1 4 00:07:55.152 22:06:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:55.152 22:06:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:55.152 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.152 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.152 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.152 22:06:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:55.152 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.152 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.152 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.152 22:06:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:55.152 22:06:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:55.152 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.152 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.152 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.152 22:06:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:55.152 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.152 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.152 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.152 22:06:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:55.152 22:06:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:55.152 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.152 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.152 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.152 22:06:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:55.152 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.152 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.152 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.152 22:06:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:55.152 22:06:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:55.152 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.152 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.152 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:07:55.152 22:06:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:55.152 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.152 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.152 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.152 22:06:50 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:55.152 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.152 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.152 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.152 22:06:50 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:55.152 22:06:50 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:55.152 22:06:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:55.152 22:06:50 -- common/autotest_common.sh@10 -- # set +x 00:07:55.152 22:06:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:55.152 22:06:50 -- target/discovery.sh@49 -- # check_bdevs= 00:07:55.152 22:06:50 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:55.152 22:06:50 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:55.152 22:06:50 -- target/discovery.sh@57 -- # nvmftestfini 00:07:55.152 22:06:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:55.152 22:06:50 -- nvmf/common.sh@116 -- # sync 00:07:55.152 22:06:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:55.152 22:06:50 -- nvmf/common.sh@119 -- # set +e 00:07:55.152 22:06:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:55.152 22:06:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:55.152 rmmod nvme_tcp 00:07:55.152 rmmod nvme_fabrics 00:07:55.152 rmmod nvme_keyring 00:07:55.413 22:06:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:55.413 22:06:50 -- nvmf/common.sh@123 -- # set -e 00:07:55.413 22:06:50 -- nvmf/common.sh@124 -- # return 0 00:07:55.413 22:06:50 -- nvmf/common.sh@477 -- # '[' -n 3412035 ']' 00:07:55.413 22:06:50 -- nvmf/common.sh@478 -- # killprocess 3412035 00:07:55.413 22:06:50 -- common/autotest_common.sh@926 -- # '[' -z 3412035 ']' 00:07:55.413 22:06:50 -- common/autotest_common.sh@930 -- # kill -0 3412035 00:07:55.413 22:06:50 -- common/autotest_common.sh@931 -- # uname 00:07:55.413 22:06:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:55.413 22:06:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3412035 00:07:55.413 22:06:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:55.413 22:06:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:55.413 22:06:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3412035' 00:07:55.413 killing process with pid 3412035 00:07:55.413 22:06:50 -- common/autotest_common.sh@945 -- # kill 3412035 00:07:55.413 [2024-07-24 22:06:50.350997] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:55.413 22:06:50 -- common/autotest_common.sh@950 -- # wait 3412035 00:07:55.413 22:06:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:55.413 22:06:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:55.413 22:06:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:55.413 22:06:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.413 22:06:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:55.413 22:06:50 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.413 22:06:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.413 22:06:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.952 22:06:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:57.952 00:07:57.952 real 0m9.376s 00:07:57.952 user 0m7.600s 00:07:57.952 sys 0m4.542s 00:07:57.952 22:06:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.952 22:06:52 -- common/autotest_common.sh@10 -- # set +x 00:07:57.952 ************************************ 00:07:57.952 END TEST nvmf_discovery 00:07:57.952 ************************************ 00:07:57.952 22:06:52 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:57.952 22:06:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:57.952 22:06:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.952 22:06:52 -- common/autotest_common.sh@10 -- # set +x 00:07:57.952 ************************************ 00:07:57.952 START TEST nvmf_referrals 00:07:57.952 ************************************ 00:07:57.952 22:06:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:57.952 * Looking for test storage... 00:07:57.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.953 22:06:52 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.953 22:06:52 -- nvmf/common.sh@7 -- # uname -s 00:07:57.953 22:06:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.953 22:06:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.953 22:06:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.953 22:06:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.953 22:06:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:57.953 22:06:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:57.953 22:06:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.953 22:06:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:57.953 22:06:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.953 22:06:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:57.953 22:06:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:57.953 22:06:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:57.953 22:06:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.953 22:06:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:57.953 22:06:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:57.953 22:06:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.953 22:06:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.953 22:06:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.953 22:06:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.953 22:06:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.953 22:06:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.953 22:06:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.953 22:06:52 -- paths/export.sh@5 -- # export PATH 00:07:57.953 22:06:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.953 22:06:52 -- nvmf/common.sh@46 -- # : 0 00:07:57.953 22:06:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:57.953 22:06:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:57.953 22:06:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:57.953 22:06:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.953 22:06:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.953 22:06:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:57.953 22:06:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:57.953 22:06:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:57.953 22:06:52 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:57.953 22:06:52 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:57.953 22:06:52 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:57.953 22:06:52 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:57.953 22:06:52 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:57.953 22:06:52 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:57.953 22:06:52 -- target/referrals.sh@37 -- # nvmftestinit 00:07:57.953 22:06:52 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:07:57.953 22:06:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.953 22:06:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:57.953 22:06:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:57.953 22:06:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:57.953 22:06:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:57.953 22:06:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:57.953 22:06:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.953 22:06:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:57.953 22:06:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:57.953 22:06:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:57.953 22:06:52 -- common/autotest_common.sh@10 -- # set +x 00:08:03.310 22:06:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:03.310 22:06:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:03.310 22:06:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:03.310 22:06:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:03.310 22:06:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:03.310 22:06:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:03.310 22:06:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:03.310 22:06:57 -- nvmf/common.sh@294 -- # net_devs=() 00:08:03.310 22:06:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:03.310 22:06:57 -- nvmf/common.sh@295 -- # e810=() 00:08:03.310 22:06:57 -- nvmf/common.sh@295 -- # local -ga e810 00:08:03.310 22:06:57 -- nvmf/common.sh@296 -- # x722=() 00:08:03.310 22:06:57 -- nvmf/common.sh@296 -- # local -ga x722 00:08:03.310 22:06:57 -- nvmf/common.sh@297 -- # mlx=() 00:08:03.310 22:06:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:03.310 22:06:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.310 22:06:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.310 22:06:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.310 22:06:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.310 22:06:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.310 22:06:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.310 22:06:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.310 22:06:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.310 22:06:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.310 22:06:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.310 22:06:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.310 22:06:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:03.310 22:06:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:03.310 22:06:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:03.310 22:06:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:03.310 22:06:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:03.310 22:06:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:03.310 22:06:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:03.310 22:06:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:03.310 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:03.310 22:06:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:03.311 22:06:57 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:03.311 22:06:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:03.311 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:03.311 22:06:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:03.311 22:06:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:03.311 22:06:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.311 22:06:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:03.311 22:06:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.311 22:06:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:03.311 Found net devices under 0000:86:00.0: cvl_0_0 00:08:03.311 22:06:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.311 22:06:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:03.311 22:06:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.311 22:06:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:03.311 22:06:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.311 22:06:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:03.311 Found net devices under 0000:86:00.1: cvl_0_1 00:08:03.311 22:06:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.311 22:06:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:03.311 22:06:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:03.311 22:06:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:03.311 22:06:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.311 22:06:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.311 22:06:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.311 22:06:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:03.311 22:06:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.311 22:06:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.311 22:06:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:03.311 22:06:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.311 22:06:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.311 22:06:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:03.311 22:06:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:03.311 22:06:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.311 22:06:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
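(Aside on the device scan traced above: gather_supported_nvmf_pci_devs matches each NIC by PCI vendor/device ID — here 0x8086:0x159b, an Intel E810 port driven by ice — and then resolves the owning kernel netdev through sysfs. A minimal standalone equivalent of that last step, assuming the 0000:86:00.0 address printed in this run, is sketched below; it is not part of the harness itself.)

    # Resolve the netdev name(s) behind a PCI network function, the same way
    # nvmf/common.sh does above with pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
    pci=0000:86:00.0                      # address taken from this run's output
    for d in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$d" ] && echo "Found net devices under $pci: $(basename "$d")"
    done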
00:08:03.311 22:06:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.311 22:06:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.311 22:06:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:03.311 22:06:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.311 22:06:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.311 22:06:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.311 22:06:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:03.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:08:03.311 00:08:03.311 --- 10.0.0.2 ping statistics --- 00:08:03.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.311 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:08:03.311 22:06:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:08:03.311 00:08:03.311 --- 10.0.0.1 ping statistics --- 00:08:03.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.311 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:08:03.311 22:06:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.311 22:06:57 -- nvmf/common.sh@410 -- # return 0 00:08:03.311 22:06:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:03.311 22:06:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.311 22:06:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:03.311 22:06:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.311 22:06:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:03.311 22:06:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:03.311 22:06:57 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:03.311 22:06:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:03.311 22:06:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:03.311 22:06:57 -- common/autotest_common.sh@10 -- # set +x 00:08:03.311 22:06:57 -- nvmf/common.sh@469 -- # nvmfpid=3415684 00:08:03.311 22:06:57 -- nvmf/common.sh@470 -- # waitforlisten 3415684 00:08:03.311 22:06:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.311 22:06:57 -- common/autotest_common.sh@819 -- # '[' -z 3415684 ']' 00:08:03.311 22:06:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.311 22:06:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:03.311 22:06:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.311 22:06:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:03.311 22:06:57 -- common/autotest_common.sh@10 -- # set +x 00:08:03.311 [2024-07-24 22:06:57.936390] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
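(The nvmf_tcp_init block traced above is the whole test-network bring-up: one port of the E810 pair, cvl_0_1, stays in the default namespace as the initiator, the other, cvl_0_0, is moved into the cvl_0_0_ns_spdk namespace as the target, each side gets a 10.0.0.x/24 address, TCP port 4420 is opened in the firewall, and a ping in each direction confirms the path. Condensed into plain commands — interface names and addresses are the ones this run printed, not fixed values:)

    ip netns add cvl_0_0_ns_spdk                               # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator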
00:08:03.311 [2024-07-24 22:06:57.936438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.311 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.311 [2024-07-24 22:06:57.996745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.311 [2024-07-24 22:06:58.037976] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:03.311 [2024-07-24 22:06:58.038095] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.311 [2024-07-24 22:06:58.038103] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.311 [2024-07-24 22:06:58.038110] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.311 [2024-07-24 22:06:58.038153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.311 [2024-07-24 22:06:58.038275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.311 [2024-07-24 22:06:58.038360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.311 [2024-07-24 22:06:58.038360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.880 22:06:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:03.880 22:06:58 -- common/autotest_common.sh@852 -- # return 0 00:08:03.880 22:06:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:03.880 22:06:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:03.880 22:06:58 -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 22:06:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.880 22:06:58 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.880 22:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.880 22:06:58 -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 [2024-07-24 22:06:58.775484] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.880 22:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:03.880 22:06:58 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:03.880 22:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.880 22:06:58 -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 [2024-07-24 22:06:58.789005] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:03.880 22:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:03.880 22:06:58 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:03.880 22:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.880 22:06:58 -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 22:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:03.880 22:06:58 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:03.880 22:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.880 22:06:58 -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 22:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:03.880 22:06:58 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:03.880 22:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.880 22:06:58 -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 22:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:03.880 22:06:58 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:03.880 22:06:58 -- target/referrals.sh@48 -- # jq length 00:08:03.880 22:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.880 22:06:58 -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 22:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:03.880 22:06:58 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:03.880 22:06:58 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:03.880 22:06:58 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:03.880 22:06:58 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:03.880 22:06:58 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:03.880 22:06:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.880 22:06:58 -- target/referrals.sh@21 -- # sort 00:08:03.880 22:06:58 -- common/autotest_common.sh@10 -- # set +x 00:08:03.880 22:06:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:03.880 22:06:58 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:03.880 22:06:58 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:03.880 22:06:58 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:03.880 22:06:58 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:03.880 22:06:58 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:03.880 22:06:58 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:03.880 22:06:58 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:03.880 22:06:58 -- target/referrals.sh@26 -- # sort 00:08:03.880 22:06:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:03.880 22:06:59 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:03.881 22:06:59 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:03.881 22:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:03.881 22:06:59 -- common/autotest_common.sh@10 -- # set +x 00:08:04.140 22:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.140 22:06:59 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:04.140 22:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.140 22:06:59 -- common/autotest_common.sh@10 -- # set +x 00:08:04.140 22:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.140 22:06:59 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:04.140 22:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.140 22:06:59 -- common/autotest_common.sh@10 -- # set +x 00:08:04.140 22:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.140 22:06:59 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.140 22:06:59 -- target/referrals.sh@56 -- # jq length 00:08:04.140 22:06:59 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.140 22:06:59 -- common/autotest_common.sh@10 -- # set +x 00:08:04.140 22:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.140 22:06:59 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:04.140 22:06:59 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:04.140 22:06:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.140 22:06:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.140 22:06:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.141 22:06:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.141 22:06:59 -- target/referrals.sh@26 -- # sort 00:08:04.141 22:06:59 -- target/referrals.sh@26 -- # echo 00:08:04.141 22:06:59 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:04.141 22:06:59 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:04.141 22:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.141 22:06:59 -- common/autotest_common.sh@10 -- # set +x 00:08:04.141 22:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.141 22:06:59 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:04.141 22:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.141 22:06:59 -- common/autotest_common.sh@10 -- # set +x 00:08:04.141 22:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.141 22:06:59 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:04.141 22:06:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.141 22:06:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.141 22:06:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.141 22:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.141 22:06:59 -- target/referrals.sh@21 -- # sort 00:08:04.141 22:06:59 -- common/autotest_common.sh@10 -- # set +x 00:08:04.141 22:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.141 22:06:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:04.141 22:06:59 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:04.141 22:06:59 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:04.141 22:06:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.141 22:06:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.141 22:06:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.141 22:06:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.141 22:06:59 -- target/referrals.sh@26 -- # sort 00:08:04.400 22:06:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:04.400 22:06:59 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:04.400 22:06:59 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:04.400 22:06:59 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:04.400 22:06:59 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:04.400 22:06:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.400 22:06:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:04.400 22:06:59 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:04.400 22:06:59 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:04.400 22:06:59 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:04.400 22:06:59 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:04.400 22:06:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.400 22:06:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:04.660 22:06:59 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:04.660 22:06:59 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:04.660 22:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.660 22:06:59 -- common/autotest_common.sh@10 -- # set +x 00:08:04.660 22:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.660 22:06:59 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:04.660 22:06:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.660 22:06:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.660 22:06:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.660 22:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.660 22:06:59 -- target/referrals.sh@21 -- # sort 00:08:04.660 22:06:59 -- common/autotest_common.sh@10 -- # set +x 00:08:04.660 22:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.660 22:06:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:04.660 22:06:59 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:04.660 22:06:59 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:04.660 22:06:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.660 22:06:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.660 22:06:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.660 22:06:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.660 22:06:59 -- target/referrals.sh@26 -- # sort 00:08:04.660 22:06:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:04.660 22:06:59 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:04.660 22:06:59 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:04.660 22:06:59 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:04.660 22:06:59 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:04.660 22:06:59 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.660 22:06:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:04.660 22:06:59 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:04.660 22:06:59 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:04.660 22:06:59 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:04.660 22:06:59 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:04.660 22:06:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.660 22:06:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:04.919 22:06:59 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:04.919 22:06:59 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:04.919 22:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.919 22:06:59 -- common/autotest_common.sh@10 -- # set +x 00:08:04.919 22:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.919 22:06:59 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.919 22:06:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.919 22:06:59 -- common/autotest_common.sh@10 -- # set +x 00:08:04.920 22:06:59 -- target/referrals.sh@82 -- # jq length 00:08:04.920 22:06:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.920 22:06:59 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:04.920 22:06:59 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:04.920 22:06:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.920 22:06:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.920 22:06:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.920 22:06:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.920 22:06:59 -- target/referrals.sh@26 -- # sort 00:08:04.920 22:07:00 -- target/referrals.sh@26 -- # echo 00:08:04.920 22:07:00 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:04.920 22:07:00 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:04.920 22:07:00 -- target/referrals.sh@86 -- # nvmftestfini 00:08:04.920 22:07:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:04.920 22:07:00 -- nvmf/common.sh@116 -- # sync 00:08:04.920 22:07:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:04.920 22:07:00 -- nvmf/common.sh@119 -- # set +e 00:08:04.920 22:07:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:04.920 22:07:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:04.920 rmmod nvme_tcp 00:08:04.920 rmmod nvme_fabrics 00:08:05.179 rmmod nvme_keyring 00:08:05.179 22:07:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:05.179 22:07:00 -- nvmf/common.sh@123 -- # set -e 00:08:05.179 22:07:00 -- nvmf/common.sh@124 -- # return 0 00:08:05.179 22:07:00 -- nvmf/common.sh@477 
-- # '[' -n 3415684 ']' 00:08:05.179 22:07:00 -- nvmf/common.sh@478 -- # killprocess 3415684 00:08:05.179 22:07:00 -- common/autotest_common.sh@926 -- # '[' -z 3415684 ']' 00:08:05.179 22:07:00 -- common/autotest_common.sh@930 -- # kill -0 3415684 00:08:05.179 22:07:00 -- common/autotest_common.sh@931 -- # uname 00:08:05.179 22:07:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:05.179 22:07:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3415684 00:08:05.179 22:07:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:05.179 22:07:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:05.179 22:07:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3415684' 00:08:05.179 killing process with pid 3415684 00:08:05.179 22:07:00 -- common/autotest_common.sh@945 -- # kill 3415684 00:08:05.179 22:07:00 -- common/autotest_common.sh@950 -- # wait 3415684 00:08:05.179 22:07:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:05.179 22:07:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:05.179 22:07:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:05.179 22:07:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.179 22:07:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:05.179 22:07:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.179 22:07:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.179 22:07:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.717 22:07:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:07.717 00:08:07.717 real 0m9.726s 00:08:07.717 user 0m11.097s 00:08:07.717 sys 0m4.350s 00:08:07.717 22:07:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.717 22:07:02 -- common/autotest_common.sh@10 -- # set +x 00:08:07.717 ************************************ 00:08:07.717 END TEST nvmf_referrals 00:08:07.717 ************************************ 00:08:07.717 22:07:02 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:07.717 22:07:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:07.717 22:07:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.717 22:07:02 -- common/autotest_common.sh@10 -- # set +x 00:08:07.717 ************************************ 00:08:07.717 START TEST nvmf_connect_disconnect 00:08:07.717 ************************************ 00:08:07.717 22:07:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:07.717 * Looking for test storage... 
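(Before the connect/disconnect run below, a recap of the referral flow the nvmf_referrals test above just exercised. Stripped of the harness wrappers (rpc_cmd, get_referral_ips), the round trip is: add referrals on the target through the RPC socket, confirm the initiator sees them in the discovery log, then remove them again. This is a hedged sketch using the same addresses as the run above; the per-run --hostnqn/--hostid arguments passed to nvme discover are omitted:)

    # Target side: talk to the nvmf_tgt over its RPC socket.
    rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
    rpc.py nvmf_discovery_get_referrals | jq length            # expect 3 at this point

    # Initiator side: the referrals show up as extra discovery-log entries.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

    # Tear down again; adding -n <nqn> instead points a referral at a specific subsystem,
    # as the later part of the test does with nqn.2016-06.io.spdk:cnode1.
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430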
00:08:07.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.717 22:07:02 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.717 22:07:02 -- nvmf/common.sh@7 -- # uname -s 00:08:07.717 22:07:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.717 22:07:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.717 22:07:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.717 22:07:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.717 22:07:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.717 22:07:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.717 22:07:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.717 22:07:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.717 22:07:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.717 22:07:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.717 22:07:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:07.717 22:07:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:07.717 22:07:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.717 22:07:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.717 22:07:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.717 22:07:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.717 22:07:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.717 22:07:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.717 22:07:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.717 22:07:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.717 22:07:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.717 22:07:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.717 22:07:02 -- paths/export.sh@5 -- # export PATH 00:08:07.717 22:07:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.717 22:07:02 -- nvmf/common.sh@46 -- # : 0 00:08:07.717 22:07:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:07.717 22:07:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:07.717 22:07:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:07.717 22:07:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.717 22:07:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.717 22:07:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:07.717 22:07:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:07.717 22:07:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:07.717 22:07:02 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:07.717 22:07:02 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.717 22:07:02 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:07.717 22:07:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:07.717 22:07:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.717 22:07:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:07.717 22:07:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:07.717 22:07:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:07.717 22:07:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.717 22:07:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.717 22:07:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.717 22:07:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:07.717 22:07:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:07.717 22:07:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:07.717 22:07:02 -- common/autotest_common.sh@10 -- # set +x 00:08:12.990 22:07:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:12.990 22:07:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:12.990 22:07:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:12.990 22:07:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:12.990 22:07:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:12.990 22:07:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:12.990 22:07:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:12.990 22:07:07 -- nvmf/common.sh@294 -- # net_devs=() 00:08:12.990 22:07:07 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:08:12.990 22:07:07 -- nvmf/common.sh@295 -- # e810=() 00:08:12.990 22:07:07 -- nvmf/common.sh@295 -- # local -ga e810 00:08:12.990 22:07:07 -- nvmf/common.sh@296 -- # x722=() 00:08:12.990 22:07:07 -- nvmf/common.sh@296 -- # local -ga x722 00:08:12.990 22:07:07 -- nvmf/common.sh@297 -- # mlx=() 00:08:12.990 22:07:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:12.990 22:07:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.990 22:07:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.990 22:07:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.990 22:07:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.990 22:07:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.990 22:07:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.990 22:07:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.990 22:07:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.990 22:07:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.990 22:07:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.990 22:07:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.990 22:07:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:12.990 22:07:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:12.990 22:07:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:12.990 22:07:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:12.990 22:07:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:12.990 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:12.990 22:07:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:12.990 22:07:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:12.990 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:12.990 22:07:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:12.990 22:07:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:12.990 22:07:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.990 22:07:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:12.990 22:07:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.990 22:07:07 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:08:12.990 Found net devices under 0000:86:00.0: cvl_0_0 00:08:12.990 22:07:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.990 22:07:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:12.990 22:07:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.990 22:07:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:12.990 22:07:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.990 22:07:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:12.990 Found net devices under 0000:86:00.1: cvl_0_1 00:08:12.990 22:07:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.990 22:07:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:12.990 22:07:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:12.990 22:07:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:12.990 22:07:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.990 22:07:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.990 22:07:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.990 22:07:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:12.990 22:07:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.990 22:07:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.990 22:07:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:12.990 22:07:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.990 22:07:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.990 22:07:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:12.990 22:07:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:12.990 22:07:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.990 22:07:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.990 22:07:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.990 22:07:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.990 22:07:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:12.990 22:07:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.990 22:07:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.990 22:07:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.990 22:07:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:12.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:08:12.990 00:08:12.990 --- 10.0.0.2 ping statistics --- 00:08:12.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.990 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:08:12.990 22:07:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:08:12.990 00:08:12.990 --- 10.0.0.1 ping statistics --- 00:08:12.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.990 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:08:12.990 22:07:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.990 22:07:07 -- nvmf/common.sh@410 -- # return 0 00:08:12.990 22:07:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:12.990 22:07:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.990 22:07:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:12.990 22:07:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.990 22:07:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:12.990 22:07:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:12.991 22:07:07 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:12.991 22:07:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:12.991 22:07:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:12.991 22:07:07 -- common/autotest_common.sh@10 -- # set +x 00:08:12.991 22:07:07 -- nvmf/common.sh@469 -- # nvmfpid=3419489 00:08:12.991 22:07:07 -- nvmf/common.sh@470 -- # waitforlisten 3419489 00:08:12.991 22:07:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:12.991 22:07:07 -- common/autotest_common.sh@819 -- # '[' -z 3419489 ']' 00:08:12.991 22:07:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.991 22:07:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:12.991 22:07:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.991 22:07:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:12.991 22:07:07 -- common/autotest_common.sh@10 -- # set +x 00:08:12.991 [2024-07-24 22:07:07.430099] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:08:12.991 [2024-07-24 22:07:07.430148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.991 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.991 [2024-07-24 22:07:07.492830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.991 [2024-07-24 22:07:07.534502] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:12.991 [2024-07-24 22:07:07.534619] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.991 [2024-07-24 22:07:07.534627] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.991 [2024-07-24 22:07:07.534634] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:12.991 [2024-07-24 22:07:07.534679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.991 [2024-07-24 22:07:07.534779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.991 [2024-07-24 22:07:07.534846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:12.991 [2024-07-24 22:07:07.534847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.251 22:07:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:13.251 22:07:08 -- common/autotest_common.sh@852 -- # return 0 00:08:13.251 22:07:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:13.251 22:07:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:13.251 22:07:08 -- common/autotest_common.sh@10 -- # set +x 00:08:13.251 22:07:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.251 22:07:08 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:13.251 22:07:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.251 22:07:08 -- common/autotest_common.sh@10 -- # set +x 00:08:13.251 [2024-07-24 22:07:08.275523] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.251 22:07:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.251 22:07:08 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:13.251 22:07:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.251 22:07:08 -- common/autotest_common.sh@10 -- # set +x 00:08:13.251 22:07:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.251 22:07:08 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:13.251 22:07:08 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:13.251 22:07:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.251 22:07:08 -- common/autotest_common.sh@10 -- # set +x 00:08:13.251 22:07:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.251 22:07:08 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:13.251 22:07:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.251 22:07:08 -- common/autotest_common.sh@10 -- # set +x 00:08:13.251 22:07:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.251 22:07:08 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.251 22:07:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:13.251 22:07:08 -- common/autotest_common.sh@10 -- # set +x 00:08:13.251 [2024-07-24 22:07:08.327428] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.251 22:07:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:13.251 22:07:08 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:13.251 22:07:08 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:13.251 22:07:08 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:13.251 22:07:08 -- target/connect_disconnect.sh@34 -- # set +x 00:08:15.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
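(The connect_disconnect target bring-up traced above reduces to a handful of RPCs once the harness wrappers are peeled away. The sketch below restates the traced commands; the nvmf_tgt path is shortened, the core mask, sizes and flags are the ones from this run, and waiting for the RPC socket is left as a comment:)

    # Start the target inside the test namespace, as nvmfappstart does above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # ...wait until the RPC socket (/var/tmp/spdk.sock) accepts requests, then:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # flags copied from the trace
    rpc.py bdev_malloc_create 64 512                           # 64 MiB, 512 B blocks; returns Malloc0 in this run
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420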
00:08:24.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.116 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.484 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.766 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.264 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:16.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.701 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.430 22:10:57 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
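(Each "disconnected 1 controller(s)" line above is one of the 100 iterations configured by num_iterations=100 and driven by NVME_CONNECT='nvme connect -i 8'. The exact loop body lives in test/nvmf/target/connect_disconnect.sh and is not shown in this log, so the following is a hedged reconstruction of the host-side pattern rather than a copy; the per-run --hostnqn/--hostid arguments are again omitted:)

    for i in $(seq 1 100); do
        # 8 I/O queues, per the -i 8 set in NVME_CONNECT above
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # ...optionally wait for the namespace block device to appear...
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # emits "NQN:... disconnected 1 controller(s)"
    done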
00:12:02.430 22:10:57 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:02.430 22:10:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:02.430 22:10:57 -- nvmf/common.sh@116 -- # sync 00:12:02.430 22:10:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:02.430 22:10:57 -- nvmf/common.sh@119 -- # set +e 00:12:02.430 22:10:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:02.430 22:10:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:02.430 rmmod nvme_tcp 00:12:02.430 rmmod nvme_fabrics 00:12:02.430 rmmod nvme_keyring 00:12:02.430 22:10:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:02.430 22:10:57 -- nvmf/common.sh@123 -- # set -e 00:12:02.430 22:10:57 -- nvmf/common.sh@124 -- # return 0 00:12:02.430 22:10:57 -- nvmf/common.sh@477 -- # '[' -n 3419489 ']' 00:12:02.430 22:10:57 -- nvmf/common.sh@478 -- # killprocess 3419489 00:12:02.430 22:10:57 -- common/autotest_common.sh@926 -- # '[' -z 3419489 ']' 00:12:02.430 22:10:57 -- common/autotest_common.sh@930 -- # kill -0 3419489 00:12:02.430 22:10:57 -- common/autotest_common.sh@931 -- # uname 00:12:02.430 22:10:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:02.430 22:10:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3419489 00:12:02.430 22:10:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:02.430 22:10:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:02.430 22:10:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3419489' 00:12:02.430 killing process with pid 3419489 00:12:02.430 22:10:57 -- common/autotest_common.sh@945 -- # kill 3419489 00:12:02.430 22:10:57 -- common/autotest_common.sh@950 -- # wait 3419489 00:12:02.430 22:10:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:02.430 22:10:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:02.430 22:10:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:02.430 22:10:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.430 22:10:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:02.430 22:10:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.430 22:10:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.430 22:10:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.370 22:10:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:04.370 00:12:04.370 real 3m57.063s 00:12:04.370 user 15m11.708s 00:12:04.370 sys 0m17.114s 00:12:04.370 22:10:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:04.370 22:10:59 -- common/autotest_common.sh@10 -- # set +x 00:12:04.370 ************************************ 00:12:04.370 END TEST nvmf_connect_disconnect 00:12:04.370 ************************************ 00:12:04.370 22:10:59 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:04.370 22:10:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:04.370 22:10:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:04.370 22:10:59 -- common/autotest_common.sh@10 -- # set +x 00:12:04.630 ************************************ 00:12:04.630 START TEST nvmf_multitarget 00:12:04.630 ************************************ 00:12:04.630 22:10:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:04.630 * Looking for test storage... 
00:12:04.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.630 22:10:59 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.630 22:10:59 -- nvmf/common.sh@7 -- # uname -s 00:12:04.630 22:10:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.630 22:10:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.630 22:10:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.630 22:10:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.630 22:10:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.630 22:10:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.630 22:10:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.630 22:10:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.630 22:10:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.630 22:10:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.630 22:10:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:04.630 22:10:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:04.630 22:10:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.630 22:10:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.630 22:10:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.630 22:10:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.630 22:10:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.630 22:10:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.630 22:10:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.630 22:10:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.630 22:10:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.631 22:10:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.631 22:10:59 -- paths/export.sh@5 -- # export PATH 00:12:04.631 22:10:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.631 22:10:59 -- nvmf/common.sh@46 -- # : 0 00:12:04.631 22:10:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:04.631 22:10:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:04.631 22:10:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:04.631 22:10:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.631 22:10:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.631 22:10:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:04.631 22:10:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:04.631 22:10:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:04.631 22:10:59 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:04.631 22:10:59 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:04.631 22:10:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:04.631 22:10:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:04.631 22:10:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:04.631 22:10:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:04.631 22:10:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:04.631 22:10:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.631 22:10:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.631 22:10:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.631 22:10:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:04.631 22:10:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:04.631 22:10:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:04.631 22:10:59 -- common/autotest_common.sh@10 -- # set +x 00:12:09.916 22:11:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:09.916 22:11:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:09.916 22:11:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:09.916 22:11:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:09.916 22:11:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:09.916 22:11:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:09.916 22:11:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:09.916 22:11:04 -- nvmf/common.sh@294 -- # net_devs=() 00:12:09.916 22:11:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:09.916 22:11:04 -- 
nvmf/common.sh@295 -- # e810=() 00:12:09.916 22:11:04 -- nvmf/common.sh@295 -- # local -ga e810 00:12:09.916 22:11:04 -- nvmf/common.sh@296 -- # x722=() 00:12:09.916 22:11:04 -- nvmf/common.sh@296 -- # local -ga x722 00:12:09.916 22:11:04 -- nvmf/common.sh@297 -- # mlx=() 00:12:09.916 22:11:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:09.916 22:11:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.916 22:11:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.916 22:11:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.916 22:11:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.916 22:11:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.916 22:11:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.916 22:11:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.916 22:11:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.916 22:11:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.916 22:11:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.916 22:11:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.916 22:11:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:09.916 22:11:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:09.916 22:11:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:09.916 22:11:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:09.916 22:11:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:09.916 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:09.916 22:11:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:09.916 22:11:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:09.916 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:09.916 22:11:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:09.916 22:11:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:09.916 22:11:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.916 22:11:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:09.916 22:11:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.916 22:11:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:12:09.916 Found net devices under 0000:86:00.0: cvl_0_0 00:12:09.916 22:11:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.916 22:11:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:09.916 22:11:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.916 22:11:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:09.916 22:11:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.916 22:11:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:09.916 Found net devices under 0000:86:00.1: cvl_0_1 00:12:09.916 22:11:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.916 22:11:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:09.916 22:11:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:09.916 22:11:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:09.916 22:11:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.916 22:11:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.916 22:11:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.916 22:11:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:09.916 22:11:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.916 22:11:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.916 22:11:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:09.916 22:11:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.916 22:11:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.916 22:11:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:09.916 22:11:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:09.916 22:11:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.916 22:11:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.916 22:11:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.916 22:11:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.916 22:11:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:09.916 22:11:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.916 22:11:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.916 22:11:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.916 22:11:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:09.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:12:09.916 00:12:09.916 --- 10.0.0.2 ping statistics --- 00:12:09.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.916 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:12:09.916 22:11:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:12:09.916 00:12:09.916 --- 10.0.0.1 ping statistics --- 00:12:09.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.916 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:12:09.916 22:11:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.916 22:11:04 -- nvmf/common.sh@410 -- # return 0 00:12:09.916 22:11:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:09.916 22:11:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.916 22:11:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:09.916 22:11:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.917 22:11:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:09.917 22:11:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:09.917 22:11:04 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:09.917 22:11:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:09.917 22:11:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:09.917 22:11:04 -- common/autotest_common.sh@10 -- # set +x 00:12:09.917 22:11:04 -- nvmf/common.sh@469 -- # nvmfpid=3463499 00:12:09.917 22:11:05 -- nvmf/common.sh@470 -- # waitforlisten 3463499 00:12:09.917 22:11:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.917 22:11:05 -- common/autotest_common.sh@819 -- # '[' -z 3463499 ']' 00:12:09.917 22:11:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.917 22:11:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:09.917 22:11:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.917 22:11:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:09.917 22:11:05 -- common/autotest_common.sh@10 -- # set +x 00:12:09.917 [2024-07-24 22:11:05.046873] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:09.917 [2024-07-24 22:11:05.046916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.177 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.177 [2024-07-24 22:11:05.106290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.177 [2024-07-24 22:11:05.144471] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:10.177 [2024-07-24 22:11:05.144588] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.177 [2024-07-24 22:11:05.144596] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.177 [2024-07-24 22:11:05.144604] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
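The nvmf/common.sh trace above wires the two ports of the same physical NIC (cvl_0_0 and cvl_0_1) into a loopback-style NVMe/TCP path: the target-side port is moved into a private network namespace, both sides get addresses on 10.0.0.0/24, TCP port 4420 is opened in the firewall, and a ping in each direction confirms reachability before nvmf_tgt is started inside the namespace. A condensed sketch of the same sequence, with the interface and namespace names taken from this run, would be:

  # Target-side port goes into its own namespace; the initiator side stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target application is then launched inside the namespace (paths shortened here):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &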
00:12:10.177 [2024-07-24 22:11:05.144700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.177 [2024-07-24 22:11:05.144799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.177 [2024-07-24 22:11:05.144863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.177 [2024-07-24 22:11:05.144864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.746 22:11:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:10.746 22:11:05 -- common/autotest_common.sh@852 -- # return 0 00:12:10.746 22:11:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:10.746 22:11:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:10.746 22:11:05 -- common/autotest_common.sh@10 -- # set +x 00:12:11.006 22:11:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.006 22:11:05 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:11.006 22:11:05 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:11.006 22:11:05 -- target/multitarget.sh@21 -- # jq length 00:12:11.006 22:11:05 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:11.006 22:11:05 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:11.006 "nvmf_tgt_1" 00:12:11.006 22:11:06 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:11.266 "nvmf_tgt_2" 00:12:11.266 22:11:06 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:11.266 22:11:06 -- target/multitarget.sh@28 -- # jq length 00:12:11.266 22:11:06 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:11.266 22:11:06 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:11.266 true 00:12:11.266 22:11:06 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:11.526 true 00:12:11.526 22:11:06 -- target/multitarget.sh@35 -- # jq length 00:12:11.526 22:11:06 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:11.526 22:11:06 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:11.526 22:11:06 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:11.526 22:11:06 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:11.526 22:11:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:11.526 22:11:06 -- nvmf/common.sh@116 -- # sync 00:12:11.526 22:11:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:11.526 22:11:06 -- nvmf/common.sh@119 -- # set +e 00:12:11.526 22:11:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:11.526 22:11:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:11.526 rmmod nvme_tcp 00:12:11.526 rmmod nvme_fabrics 00:12:11.526 rmmod nvme_keyring 00:12:11.526 22:11:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:11.526 22:11:06 -- nvmf/common.sh@123 -- # set -e 00:12:11.526 22:11:06 -- nvmf/common.sh@124 -- # return 0 
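multitarget.sh drives everything through the multitarget_rpc.py helper: it first confirms that exactly one target exists, creates two more, checks that the count is three, then deletes them and checks the count is back to one (the nvmf_delete_target calls and the final length check follow below). A sketch of that flow, assuming the helper is invoked by a path relative to the SPDK checkout rather than the absolute workspace path used in this run, is:

  RPC=./test/nvmf/target/multitarget_rpc.py          # relative path is an assumption
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists so far
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32        # add two named targets (-s 32 as in this run)
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target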
00:12:11.526 22:11:06 -- nvmf/common.sh@477 -- # '[' -n 3463499 ']' 00:12:11.526 22:11:06 -- nvmf/common.sh@478 -- # killprocess 3463499 00:12:11.526 22:11:06 -- common/autotest_common.sh@926 -- # '[' -z 3463499 ']' 00:12:11.526 22:11:06 -- common/autotest_common.sh@930 -- # kill -0 3463499 00:12:11.526 22:11:06 -- common/autotest_common.sh@931 -- # uname 00:12:11.526 22:11:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:11.526 22:11:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3463499 00:12:11.786 22:11:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:11.786 22:11:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:11.786 22:11:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3463499' 00:12:11.786 killing process with pid 3463499 00:12:11.786 22:11:06 -- common/autotest_common.sh@945 -- # kill 3463499 00:12:11.786 22:11:06 -- common/autotest_common.sh@950 -- # wait 3463499 00:12:11.786 22:11:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:11.786 22:11:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:11.786 22:11:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:11.786 22:11:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:11.786 22:11:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:11.786 22:11:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.786 22:11:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:11.786 22:11:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.327 22:11:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:14.327 00:12:14.327 real 0m9.442s 00:12:14.327 user 0m9.123s 00:12:14.327 sys 0m4.470s 00:12:14.327 22:11:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.327 22:11:08 -- common/autotest_common.sh@10 -- # set +x 00:12:14.327 ************************************ 00:12:14.327 END TEST nvmf_multitarget 00:12:14.327 ************************************ 00:12:14.327 22:11:08 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:14.327 22:11:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:14.327 22:11:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:14.327 22:11:08 -- common/autotest_common.sh@10 -- # set +x 00:12:14.327 ************************************ 00:12:14.327 START TEST nvmf_rpc 00:12:14.327 ************************************ 00:12:14.327 22:11:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:14.327 * Looking for test storage... 
00:12:14.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.327 22:11:09 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.327 22:11:09 -- nvmf/common.sh@7 -- # uname -s 00:12:14.327 22:11:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.327 22:11:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.327 22:11:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.327 22:11:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.327 22:11:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.327 22:11:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.327 22:11:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.327 22:11:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.327 22:11:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.327 22:11:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.327 22:11:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:14.327 22:11:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:14.327 22:11:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.327 22:11:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.327 22:11:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.327 22:11:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.327 22:11:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.327 22:11:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.327 22:11:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.328 22:11:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.328 22:11:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.328 22:11:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.328 22:11:09 -- paths/export.sh@5 -- # export PATH 00:12:14.328 22:11:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.328 22:11:09 -- nvmf/common.sh@46 -- # : 0 00:12:14.328 22:11:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:14.328 22:11:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:14.328 22:11:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:14.328 22:11:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.328 22:11:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.328 22:11:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:14.328 22:11:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:14.328 22:11:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:14.328 22:11:09 -- target/rpc.sh@11 -- # loops=5 00:12:14.328 22:11:09 -- target/rpc.sh@23 -- # nvmftestinit 00:12:14.328 22:11:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:14.328 22:11:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.328 22:11:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:14.328 22:11:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:14.328 22:11:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:14.328 22:11:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.328 22:11:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.328 22:11:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.328 22:11:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:14.328 22:11:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:14.328 22:11:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:14.328 22:11:09 -- common/autotest_common.sh@10 -- # set +x 00:12:19.614 22:11:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:19.615 22:11:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:19.615 22:11:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:19.615 22:11:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:19.615 22:11:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:19.615 22:11:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:19.615 22:11:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:19.615 22:11:13 -- nvmf/common.sh@294 -- # net_devs=() 00:12:19.615 22:11:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:19.615 22:11:13 -- nvmf/common.sh@295 -- # e810=() 00:12:19.615 22:11:13 -- nvmf/common.sh@295 -- # local -ga e810 00:12:19.615 
22:11:13 -- nvmf/common.sh@296 -- # x722=() 00:12:19.615 22:11:13 -- nvmf/common.sh@296 -- # local -ga x722 00:12:19.615 22:11:13 -- nvmf/common.sh@297 -- # mlx=() 00:12:19.615 22:11:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:19.615 22:11:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:19.615 22:11:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:19.615 22:11:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:19.615 22:11:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:19.615 22:11:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:19.615 22:11:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:19.615 22:11:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:19.615 22:11:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:19.615 22:11:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:19.615 22:11:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:19.615 22:11:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:19.615 22:11:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:19.615 22:11:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:19.615 22:11:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:19.615 22:11:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:19.615 22:11:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:19.615 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:19.615 22:11:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:19.615 22:11:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:19.615 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:19.615 22:11:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:19.615 22:11:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:19.615 22:11:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.615 22:11:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:19.615 22:11:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.615 22:11:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:19.615 Found net devices under 0000:86:00.0: cvl_0_0 00:12:19.615 22:11:13 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:19.615 22:11:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:19.615 22:11:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:19.615 22:11:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:19.615 22:11:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:19.615 22:11:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:19.615 Found net devices under 0000:86:00.1: cvl_0_1 00:12:19.615 22:11:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:19.615 22:11:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:19.615 22:11:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:19.615 22:11:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:19.615 22:11:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:19.615 22:11:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.615 22:11:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.616 22:11:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:19.616 22:11:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:19.616 22:11:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:19.616 22:11:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:19.616 22:11:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:19.616 22:11:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:19.616 22:11:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.616 22:11:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:19.616 22:11:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:19.616 22:11:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:19.616 22:11:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:19.616 22:11:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:19.616 22:11:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:19.616 22:11:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:19.616 22:11:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:19.616 22:11:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:19.616 22:11:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:19.616 22:11:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:19.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:12:19.616 00:12:19.616 --- 10.0.0.2 ping statistics --- 00:12:19.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.616 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:12:19.616 22:11:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:19.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:19.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:12:19.616 00:12:19.616 --- 10.0.0.1 ping statistics --- 00:12:19.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.616 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:12:19.616 22:11:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.616 22:11:14 -- nvmf/common.sh@410 -- # return 0 00:12:19.616 22:11:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:19.616 22:11:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.616 22:11:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:19.616 22:11:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:19.616 22:11:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.616 22:11:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:19.616 22:11:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:19.616 22:11:14 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:19.616 22:11:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:19.616 22:11:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:19.616 22:11:14 -- common/autotest_common.sh@10 -- # set +x 00:12:19.616 22:11:14 -- nvmf/common.sh@469 -- # nvmfpid=3467299 00:12:19.616 22:11:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:19.616 22:11:14 -- nvmf/common.sh@470 -- # waitforlisten 3467299 00:12:19.616 22:11:14 -- common/autotest_common.sh@819 -- # '[' -z 3467299 ']' 00:12:19.616 22:11:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.616 22:11:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:19.616 22:11:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.616 22:11:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:19.616 22:11:14 -- common/autotest_common.sh@10 -- # set +x 00:12:19.616 [2024-07-24 22:11:14.325442] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:19.616 [2024-07-24 22:11:14.325483] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.616 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.616 [2024-07-24 22:11:14.383634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.616 [2024-07-24 22:11:14.422428] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:19.616 [2024-07-24 22:11:14.422539] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.616 [2024-07-24 22:11:14.422546] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.616 [2024-07-24 22:11:14.422553] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
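As in the previous test, rpc.sh brings the target up inside the namespace with nvmfappstart and then blocks until the RPC socket answers (waitforlisten 3467299 above). A rough stand-in for that start-and-wait step, using a plain polling loop against what is assumed to be the default /var/tmp/spdk.sock socket rather than the harness's own waitforlisten helper, might look like:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  pid=$!
  # Poll the RPC socket until the app is ready; spdk_get_version is a cheap query.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$pid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done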
00:12:19.616 [2024-07-24 22:11:14.422643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.616 [2024-07-24 22:11:14.422731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.616 [2024-07-24 22:11:14.422817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.616 [2024-07-24 22:11:14.422818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.187 22:11:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:20.187 22:11:15 -- common/autotest_common.sh@852 -- # return 0 00:12:20.187 22:11:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:20.187 22:11:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:20.187 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:20.187 22:11:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.187 22:11:15 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:20.187 22:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.187 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:20.187 22:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.187 22:11:15 -- target/rpc.sh@26 -- # stats='{ 00:12:20.187 "tick_rate": 2300000000, 00:12:20.187 "poll_groups": [ 00:12:20.187 { 00:12:20.187 "name": "nvmf_tgt_poll_group_0", 00:12:20.187 "admin_qpairs": 0, 00:12:20.187 "io_qpairs": 0, 00:12:20.187 "current_admin_qpairs": 0, 00:12:20.187 "current_io_qpairs": 0, 00:12:20.187 "pending_bdev_io": 0, 00:12:20.187 "completed_nvme_io": 0, 00:12:20.187 "transports": [] 00:12:20.187 }, 00:12:20.187 { 00:12:20.187 "name": "nvmf_tgt_poll_group_1", 00:12:20.187 "admin_qpairs": 0, 00:12:20.187 "io_qpairs": 0, 00:12:20.187 "current_admin_qpairs": 0, 00:12:20.187 "current_io_qpairs": 0, 00:12:20.187 "pending_bdev_io": 0, 00:12:20.187 "completed_nvme_io": 0, 00:12:20.187 "transports": [] 00:12:20.187 }, 00:12:20.187 { 00:12:20.187 "name": "nvmf_tgt_poll_group_2", 00:12:20.187 "admin_qpairs": 0, 00:12:20.187 "io_qpairs": 0, 00:12:20.187 "current_admin_qpairs": 0, 00:12:20.187 "current_io_qpairs": 0, 00:12:20.187 "pending_bdev_io": 0, 00:12:20.187 "completed_nvme_io": 0, 00:12:20.187 "transports": [] 00:12:20.187 }, 00:12:20.187 { 00:12:20.187 "name": "nvmf_tgt_poll_group_3", 00:12:20.187 "admin_qpairs": 0, 00:12:20.187 "io_qpairs": 0, 00:12:20.187 "current_admin_qpairs": 0, 00:12:20.187 "current_io_qpairs": 0, 00:12:20.187 "pending_bdev_io": 0, 00:12:20.187 "completed_nvme_io": 0, 00:12:20.187 "transports": [] 00:12:20.187 } 00:12:20.187 ] 00:12:20.187 }' 00:12:20.187 22:11:15 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:20.187 22:11:15 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:20.187 22:11:15 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:20.187 22:11:15 -- target/rpc.sh@15 -- # wc -l 00:12:20.187 22:11:15 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:20.187 22:11:15 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:20.187 22:11:15 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:20.187 22:11:15 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.187 22:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.187 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:20.187 [2024-07-24 22:11:15.272831] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.188 22:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.188 22:11:15 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:20.188 22:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.188 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:20.188 22:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.188 22:11:15 -- target/rpc.sh@33 -- # stats='{ 00:12:20.188 "tick_rate": 2300000000, 00:12:20.188 "poll_groups": [ 00:12:20.188 { 00:12:20.188 "name": "nvmf_tgt_poll_group_0", 00:12:20.188 "admin_qpairs": 0, 00:12:20.188 "io_qpairs": 0, 00:12:20.188 "current_admin_qpairs": 0, 00:12:20.188 "current_io_qpairs": 0, 00:12:20.188 "pending_bdev_io": 0, 00:12:20.188 "completed_nvme_io": 0, 00:12:20.188 "transports": [ 00:12:20.188 { 00:12:20.188 "trtype": "TCP" 00:12:20.188 } 00:12:20.188 ] 00:12:20.188 }, 00:12:20.188 { 00:12:20.188 "name": "nvmf_tgt_poll_group_1", 00:12:20.188 "admin_qpairs": 0, 00:12:20.188 "io_qpairs": 0, 00:12:20.188 "current_admin_qpairs": 0, 00:12:20.188 "current_io_qpairs": 0, 00:12:20.188 "pending_bdev_io": 0, 00:12:20.188 "completed_nvme_io": 0, 00:12:20.188 "transports": [ 00:12:20.188 { 00:12:20.188 "trtype": "TCP" 00:12:20.188 } 00:12:20.188 ] 00:12:20.188 }, 00:12:20.188 { 00:12:20.188 "name": "nvmf_tgt_poll_group_2", 00:12:20.188 "admin_qpairs": 0, 00:12:20.188 "io_qpairs": 0, 00:12:20.188 "current_admin_qpairs": 0, 00:12:20.188 "current_io_qpairs": 0, 00:12:20.188 "pending_bdev_io": 0, 00:12:20.188 "completed_nvme_io": 0, 00:12:20.188 "transports": [ 00:12:20.188 { 00:12:20.188 "trtype": "TCP" 00:12:20.188 } 00:12:20.188 ] 00:12:20.188 }, 00:12:20.188 { 00:12:20.188 "name": "nvmf_tgt_poll_group_3", 00:12:20.188 "admin_qpairs": 0, 00:12:20.188 "io_qpairs": 0, 00:12:20.188 "current_admin_qpairs": 0, 00:12:20.188 "current_io_qpairs": 0, 00:12:20.188 "pending_bdev_io": 0, 00:12:20.188 "completed_nvme_io": 0, 00:12:20.188 "transports": [ 00:12:20.188 { 00:12:20.188 "trtype": "TCP" 00:12:20.188 } 00:12:20.188 ] 00:12:20.188 } 00:12:20.188 ] 00:12:20.188 }' 00:12:20.188 22:11:15 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:20.188 22:11:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:20.188 22:11:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:20.188 22:11:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:20.448 22:11:15 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:20.448 22:11:15 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:20.449 22:11:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:20.449 22:11:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:20.449 22:11:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:20.449 22:11:15 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:20.449 22:11:15 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:20.449 22:11:15 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:20.449 22:11:15 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:20.449 22:11:15 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:20.449 22:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.449 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:20.449 Malloc1 00:12:20.449 22:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.449 22:11:15 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:20.449 22:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.449 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:20.449 
22:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.449 22:11:15 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:20.449 22:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.449 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:20.449 22:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.449 22:11:15 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:20.449 22:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.449 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:20.449 22:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.449 22:11:15 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.449 22:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.449 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:20.449 [2024-07-24 22:11:15.444922] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.449 22:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.449 22:11:15 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:20.449 22:11:15 -- common/autotest_common.sh@640 -- # local es=0 00:12:20.449 22:11:15 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:20.449 22:11:15 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:20.449 22:11:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:20.449 22:11:15 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:20.449 22:11:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:20.449 22:11:15 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:20.449 22:11:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:20.449 22:11:15 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:20.449 22:11:15 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:20.449 22:11:15 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:20.449 [2024-07-24 22:11:15.469694] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:20.449 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:20.449 could not add new controller: failed to write to nvme-fabrics device 00:12:20.449 22:11:15 -- common/autotest_common.sh@643 -- # es=1 00:12:20.449 22:11:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:20.449 22:11:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:20.449 22:11:15 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:12:20.449 22:11:15 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:20.449 22:11:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.449 22:11:15 -- common/autotest_common.sh@10 -- # set +x 00:12:20.449 22:11:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.449 22:11:15 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.830 22:11:16 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.830 22:11:16 -- common/autotest_common.sh@1177 -- # local i=0 00:12:21.830 22:11:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.830 22:11:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:21.830 22:11:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:23.740 22:11:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:23.740 22:11:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:23.740 22:11:18 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.740 22:11:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:23.740 22:11:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.740 22:11:18 -- common/autotest_common.sh@1187 -- # return 0 00:12:23.740 22:11:18 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.740 22:11:18 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.740 22:11:18 -- common/autotest_common.sh@1198 -- # local i=0 00:12:23.740 22:11:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:23.740 22:11:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.740 22:11:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:23.740 22:11:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.740 22:11:18 -- common/autotest_common.sh@1210 -- # return 0 00:12:23.740 22:11:18 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:23.740 22:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.740 22:11:18 -- common/autotest_common.sh@10 -- # set +x 00:12:23.740 22:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.740 22:11:18 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.740 22:11:18 -- common/autotest_common.sh@640 -- # local es=0 00:12:23.740 22:11:18 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.740 22:11:18 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:23.740 22:11:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:23.740 22:11:18 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:23.740 22:11:18 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:23.740 22:11:18 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:23.740 22:11:18 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:23.740 22:11:18 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:23.740 22:11:18 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:23.740 22:11:18 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.740 [2024-07-24 22:11:18.784161] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:23.740 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:23.740 could not add new controller: failed to write to nvme-fabrics device 00:12:23.740 22:11:18 -- common/autotest_common.sh@643 -- # es=1 00:12:23.740 22:11:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:23.740 22:11:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:23.740 22:11:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:23.740 22:11:18 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:23.740 22:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.740 22:11:18 -- common/autotest_common.sh@10 -- # set +x 00:12:23.740 22:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.740 22:11:18 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:25.123 22:11:19 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:25.123 22:11:19 -- common/autotest_common.sh@1177 -- # local i=0 00:12:25.123 22:11:19 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.123 22:11:19 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:25.123 22:11:19 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:27.035 22:11:21 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:27.035 22:11:21 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:27.035 22:11:21 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.035 22:11:21 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:27.035 22:11:21 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.035 22:11:21 -- common/autotest_common.sh@1187 -- # return 0 00:12:27.035 22:11:21 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.035 22:11:22 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.035 22:11:22 -- common/autotest_common.sh@1198 -- # local i=0 00:12:27.035 22:11:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:27.035 22:11:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.035 22:11:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:27.035 22:11:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.035 22:11:22 -- common/autotest_common.sh@1210 -- # return 0 00:12:27.035 22:11:22 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:27.035 22:11:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.035 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:12:27.035 22:11:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.035 22:11:22 -- target/rpc.sh@81 -- # seq 1 5 00:12:27.035 22:11:22 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:27.035 22:11:22 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:27.035 22:11:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.035 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:12:27.035 22:11:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.035 22:11:22 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.035 22:11:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.035 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:12:27.035 [2024-07-24 22:11:22.120737] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.035 22:11:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.035 22:11:22 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:27.035 22:11:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.035 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:12:27.035 22:11:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.035 22:11:22 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:27.035 22:11:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:27.035 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:12:27.035 22:11:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:27.035 22:11:22 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.448 22:11:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.448 22:11:23 -- common/autotest_common.sh@1177 -- # local i=0 00:12:28.448 22:11:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.448 22:11:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:28.448 22:11:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:30.356 22:11:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:30.356 22:11:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:30.356 22:11:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.356 22:11:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:30.356 22:11:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.356 22:11:25 -- common/autotest_common.sh@1187 -- # return 0 00:12:30.356 22:11:25 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:30.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.356 22:11:25 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:30.356 22:11:25 -- common/autotest_common.sh@1198 -- # local i=0 00:12:30.356 22:11:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:30.356 22:11:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
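The waitforserial / waitforserial_disconnect traces interleaved above simply poll lsblk on the initiator until a block device carrying the SPDK serial number shows up (or, on disconnect, stops matching). A minimal stand-alone sketch of that polling pattern follows; the 15-retry limit, the 2-second sleep and the lsblk/grep check are taken from the trace, while the function body itself is an assumption rather than the exact helper in autotest_common.sh:

    # Hypothetical rewrite of the waitforserial idea (not the script's own code).
    waitforserial_sketch() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            sleep 2
            # Count block devices whose SERIAL column matches, as the trace does.
            if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
                return 0
            fi
        done
        echo "device with serial $serial never appeared" >&2
        return 1
    }
    # Example, with the serial used throughout this log:
    # waitforserial_sketch SPDKISFASTANDAWESOME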
00:12:30.356 22:11:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:30.356 22:11:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:30.356 22:11:25 -- common/autotest_common.sh@1210 -- # return 0 00:12:30.356 22:11:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.356 22:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.356 22:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:30.356 22:11:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.356 22:11:25 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:30.356 22:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.356 22:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:30.356 22:11:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.356 22:11:25 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:30.356 22:11:25 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:30.356 22:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.356 22:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:30.356 22:11:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.356 22:11:25 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.356 22:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.356 22:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:30.356 [2024-07-24 22:11:25.366789] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.356 22:11:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.356 22:11:25 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:30.356 22:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.356 22:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:30.356 22:11:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.356 22:11:25 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:30.356 22:11:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:30.356 22:11:25 -- common/autotest_common.sh@10 -- # set +x 00:12:30.356 22:11:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:30.356 22:11:25 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.738 22:11:26 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.738 22:11:26 -- common/autotest_common.sh@1177 -- # local i=0 00:12:31.738 22:11:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.738 22:11:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:31.738 22:11:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:33.648 22:11:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:33.648 22:11:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:33.648 22:11:28 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.648 22:11:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:33.648 22:11:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.648 22:11:28 -- 
common/autotest_common.sh@1187 -- # return 0 00:12:33.648 22:11:28 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.648 22:11:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.648 22:11:28 -- common/autotest_common.sh@1198 -- # local i=0 00:12:33.648 22:11:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:33.648 22:11:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.648 22:11:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:33.648 22:11:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.648 22:11:28 -- common/autotest_common.sh@1210 -- # return 0 00:12:33.648 22:11:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.648 22:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.648 22:11:28 -- common/autotest_common.sh@10 -- # set +x 00:12:33.648 22:11:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.648 22:11:28 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.648 22:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.648 22:11:28 -- common/autotest_common.sh@10 -- # set +x 00:12:33.648 22:11:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.648 22:11:28 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.648 22:11:28 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.648 22:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.648 22:11:28 -- common/autotest_common.sh@10 -- # set +x 00:12:33.648 22:11:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.648 22:11:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.648 22:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.648 22:11:28 -- common/autotest_common.sh@10 -- # set +x 00:12:33.648 [2024-07-24 22:11:28.647651] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.648 22:11:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.648 22:11:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.648 22:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.648 22:11:28 -- common/autotest_common.sh@10 -- # set +x 00:12:33.648 22:11:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.648 22:11:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.648 22:11:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.648 22:11:28 -- common/autotest_common.sh@10 -- # set +x 00:12:33.648 22:11:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.648 22:11:28 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.031 22:11:29 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.031 22:11:29 -- common/autotest_common.sh@1177 -- # local i=0 00:12:35.031 22:11:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.031 22:11:29 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:12:35.031 22:11:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:36.942 22:11:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:36.942 22:11:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:36.942 22:11:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.942 22:11:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:36.942 22:11:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.942 22:11:31 -- common/autotest_common.sh@1187 -- # return 0 00:12:36.942 22:11:31 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.942 22:11:31 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.942 22:11:31 -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.942 22:11:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:36.942 22:11:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.942 22:11:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:36.942 22:11:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.942 22:11:31 -- common/autotest_common.sh@1210 -- # return 0 00:12:36.942 22:11:31 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.942 22:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.942 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.942 22:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.942 22:11:31 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.942 22:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.942 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.942 22:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.942 22:11:31 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:36.942 22:11:31 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.942 22:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.942 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.942 22:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.942 22:11:31 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.942 22:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.942 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.942 [2024-07-24 22:11:31.965282] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.942 22:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.942 22:11:31 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:36.942 22:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.942 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.942 22:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.942 22:11:31 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.942 22:11:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:36.942 22:11:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.942 22:11:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:36.942 
22:11:31 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.324 22:11:33 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.324 22:11:33 -- common/autotest_common.sh@1177 -- # local i=0 00:12:38.324 22:11:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.324 22:11:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:38.324 22:11:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:40.235 22:11:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:40.235 22:11:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:40.235 22:11:35 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.235 22:11:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:40.235 22:11:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.235 22:11:35 -- common/autotest_common.sh@1187 -- # return 0 00:12:40.235 22:11:35 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.235 22:11:35 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.235 22:11:35 -- common/autotest_common.sh@1198 -- # local i=0 00:12:40.235 22:11:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:40.235 22:11:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.235 22:11:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:40.235 22:11:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.235 22:11:35 -- common/autotest_common.sh@1210 -- # return 0 00:12:40.235 22:11:35 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.235 22:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.235 22:11:35 -- common/autotest_common.sh@10 -- # set +x 00:12:40.235 22:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.235 22:11:35 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.235 22:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.235 22:11:35 -- common/autotest_common.sh@10 -- # set +x 00:12:40.235 22:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.235 22:11:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:40.235 22:11:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.235 22:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.235 22:11:35 -- common/autotest_common.sh@10 -- # set +x 00:12:40.235 22:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.235 22:11:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.235 22:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.235 22:11:35 -- common/autotest_common.sh@10 -- # set +x 00:12:40.235 [2024-07-24 22:11:35.353715] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.235 22:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.235 22:11:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.235 
22:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.235 22:11:35 -- common/autotest_common.sh@10 -- # set +x 00:12:40.235 22:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.235 22:11:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.235 22:11:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.235 22:11:35 -- common/autotest_common.sh@10 -- # set +x 00:12:40.495 22:11:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.495 22:11:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.435 22:11:36 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.435 22:11:36 -- common/autotest_common.sh@1177 -- # local i=0 00:12:41.435 22:11:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.435 22:11:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:41.435 22:11:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:43.978 22:11:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:43.978 22:11:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:43.978 22:11:38 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:43.978 22:11:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:43.978 22:11:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.978 22:11:38 -- common/autotest_common.sh@1187 -- # return 0 00:12:43.978 22:11:38 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.978 22:11:38 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.978 22:11:38 -- common/autotest_common.sh@1198 -- # local i=0 00:12:43.978 22:11:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:43.978 22:11:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.978 22:11:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:43.978 22:11:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.978 22:11:38 -- common/autotest_common.sh@1210 -- # return 0 00:12:43.978 22:11:38 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:43.978 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.978 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.978 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.978 22:11:38 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.978 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.978 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.978 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.978 22:11:38 -- target/rpc.sh@99 -- # seq 1 5 00:12:43.978 22:11:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.978 22:11:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.978 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.978 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.978 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.978 22:11:38 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.978 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.978 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.978 [2024-07-24 22:11:38.703889] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.978 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.978 22:11:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.978 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.978 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.978 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.979 22:11:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 [2024-07-24 22:11:38.751987] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- 
common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.979 22:11:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 [2024-07-24 22:11:38.800139] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.979 22:11:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 [2024-07-24 22:11:38.852320] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 
22:11:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:43.979 22:11:38 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 [2024-07-24 22:11:38.900471] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
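Each pass of the loops above drives the target entirely through rpc.py: create the subsystem, attach a TCP listener on 10.0.0.2:4420, add the Malloc1 bdev as a namespace, open it to any host, then remove the namespace and delete the subsystem again. A condensed sketch of that call order, using the same rpc.py client and arguments seen in the trace (an illustration of the sequence only, not the test script itself):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Build the subsystem up...
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    # ...and tear it down: namespace id 1 first, then the subsystem.
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_delete_subsystem "$nqn"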
00:12:43.979 22:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.979 22:11:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 22:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.979 22:11:38 -- target/rpc.sh@110 -- # stats='{ 00:12:43.979 "tick_rate": 2300000000, 00:12:43.979 "poll_groups": [ 00:12:43.979 { 00:12:43.979 "name": "nvmf_tgt_poll_group_0", 00:12:43.979 "admin_qpairs": 2, 00:12:43.979 "io_qpairs": 168, 00:12:43.979 "current_admin_qpairs": 0, 00:12:43.979 "current_io_qpairs": 0, 00:12:43.979 "pending_bdev_io": 0, 00:12:43.979 "completed_nvme_io": 280, 00:12:43.979 "transports": [ 00:12:43.979 { 00:12:43.979 "trtype": "TCP" 00:12:43.979 } 00:12:43.979 ] 00:12:43.979 }, 00:12:43.979 { 00:12:43.979 "name": "nvmf_tgt_poll_group_1", 00:12:43.979 "admin_qpairs": 2, 00:12:43.979 "io_qpairs": 168, 00:12:43.979 "current_admin_qpairs": 0, 00:12:43.979 "current_io_qpairs": 0, 00:12:43.979 "pending_bdev_io": 0, 00:12:43.979 "completed_nvme_io": 168, 00:12:43.979 "transports": [ 00:12:43.979 { 00:12:43.979 "trtype": "TCP" 00:12:43.979 } 00:12:43.979 ] 00:12:43.979 }, 00:12:43.979 { 00:12:43.979 "name": "nvmf_tgt_poll_group_2", 00:12:43.979 "admin_qpairs": 1, 00:12:43.979 "io_qpairs": 168, 00:12:43.979 "current_admin_qpairs": 0, 00:12:43.979 "current_io_qpairs": 0, 00:12:43.979 "pending_bdev_io": 0, 00:12:43.979 "completed_nvme_io": 269, 00:12:43.979 "transports": [ 00:12:43.979 { 00:12:43.979 "trtype": "TCP" 00:12:43.979 } 00:12:43.979 ] 00:12:43.979 }, 00:12:43.979 { 00:12:43.980 "name": "nvmf_tgt_poll_group_3", 00:12:43.980 "admin_qpairs": 2, 00:12:43.980 "io_qpairs": 168, 00:12:43.980 "current_admin_qpairs": 0, 00:12:43.980 "current_io_qpairs": 0, 00:12:43.980 "pending_bdev_io": 0, 00:12:43.980 "completed_nvme_io": 305, 00:12:43.980 "transports": [ 00:12:43.980 { 00:12:43.980 "trtype": "TCP" 00:12:43.980 } 00:12:43.980 ] 00:12:43.980 } 00:12:43.980 ] 00:12:43.980 }' 00:12:43.980 22:11:38 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:43.980 22:11:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:43.980 22:11:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:43.980 22:11:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.980 22:11:39 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:43.980 22:11:39 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:43.980 22:11:39 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:43.980 22:11:39 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:43.980 22:11:39 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.980 22:11:39 -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:43.980 22:11:39 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:43.980 22:11:39 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:43.980 22:11:39 -- target/rpc.sh@123 -- # nvmftestfini 00:12:43.980 22:11:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:43.980 22:11:39 -- nvmf/common.sh@116 -- # sync 00:12:43.980 22:11:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:43.980 22:11:39 -- nvmf/common.sh@119 -- # set +e 00:12:43.980 22:11:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:43.980 22:11:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:43.980 rmmod nvme_tcp 00:12:43.980 rmmod nvme_fabrics 00:12:43.980 rmmod nvme_keyring 00:12:43.980 22:11:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:43.980 22:11:39 -- nvmf/common.sh@123 -- # set -e 00:12:43.980 22:11:39 -- 
nvmf/common.sh@124 -- # return 0 00:12:43.980 22:11:39 -- nvmf/common.sh@477 -- # '[' -n 3467299 ']' 00:12:43.980 22:11:39 -- nvmf/common.sh@478 -- # killprocess 3467299 00:12:43.980 22:11:39 -- common/autotest_common.sh@926 -- # '[' -z 3467299 ']' 00:12:43.980 22:11:39 -- common/autotest_common.sh@930 -- # kill -0 3467299 00:12:43.980 22:11:39 -- common/autotest_common.sh@931 -- # uname 00:12:43.980 22:11:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:43.980 22:11:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3467299 00:12:44.239 22:11:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:44.239 22:11:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:44.239 22:11:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3467299' 00:12:44.239 killing process with pid 3467299 00:12:44.239 22:11:39 -- common/autotest_common.sh@945 -- # kill 3467299 00:12:44.239 22:11:39 -- common/autotest_common.sh@950 -- # wait 3467299 00:12:44.239 22:11:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:44.239 22:11:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:44.239 22:11:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:44.239 22:11:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.239 22:11:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:44.239 22:11:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.239 22:11:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.239 22:11:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.781 22:11:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:46.781 00:12:46.781 real 0m32.426s 00:12:46.781 user 1m40.741s 00:12:46.781 sys 0m5.478s 00:12:46.781 22:11:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:46.781 22:11:41 -- common/autotest_common.sh@10 -- # set +x 00:12:46.781 ************************************ 00:12:46.781 END TEST nvmf_rpc 00:12:46.781 ************************************ 00:12:46.781 22:11:41 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:46.781 22:11:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:46.781 22:11:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:46.781 22:11:41 -- common/autotest_common.sh@10 -- # set +x 00:12:46.781 ************************************ 00:12:46.781 START TEST nvmf_invalid 00:12:46.781 ************************************ 00:12:46.781 22:11:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:46.781 * Looking for test storage... 
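Before the nvmf_rpc test above shut down, its jsum helper reduced the nvmf_get_stats JSON to single numbers: jq pulls one field out of every poll group and awk adds the column up. A minimal sketch of that filter chain, assuming the stats dump shown earlier has been saved to a file (the file name is an assumption; the real helper reads the stats from a shell variable):

    # Sum io_qpairs across all poll groups from a saved nvmf_get_stats dump.
    jq '.poll_groups[].io_qpairs' stats.json | awk '{s+=$1} END {print s}'
    # With the four poll groups above reporting 168 io_qpairs each this prints 672,
    # matching the (( 672 > 0 )) check in the trace.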
00:12:46.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.781 22:11:41 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.781 22:11:41 -- nvmf/common.sh@7 -- # uname -s 00:12:46.781 22:11:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.781 22:11:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.781 22:11:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.781 22:11:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.781 22:11:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.781 22:11:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.781 22:11:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.781 22:11:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.781 22:11:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.781 22:11:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.781 22:11:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:46.781 22:11:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:46.781 22:11:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.781 22:11:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.781 22:11:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.781 22:11:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.781 22:11:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.781 22:11:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.781 22:11:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.781 22:11:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.781 22:11:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.781 22:11:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.781 22:11:41 -- paths/export.sh@5 -- # export PATH 00:12:46.781 22:11:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.781 22:11:41 -- nvmf/common.sh@46 -- # : 0 00:12:46.781 22:11:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:46.781 22:11:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:46.781 22:11:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:46.781 22:11:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.781 22:11:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.781 22:11:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:46.781 22:11:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:46.781 22:11:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:46.781 22:11:41 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:46.781 22:11:41 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:46.781 22:11:41 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:46.781 22:11:41 -- target/invalid.sh@14 -- # target=foobar 00:12:46.781 22:11:41 -- target/invalid.sh@16 -- # RANDOM=0 00:12:46.781 22:11:41 -- target/invalid.sh@34 -- # nvmftestinit 00:12:46.781 22:11:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:46.781 22:11:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.781 22:11:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:46.781 22:11:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:46.781 22:11:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:46.782 22:11:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.782 22:11:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:46.782 22:11:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.782 22:11:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:46.782 22:11:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:46.782 22:11:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:46.782 22:11:41 -- common/autotest_common.sh@10 -- # set +x 00:12:52.097 22:11:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:52.097 22:11:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:52.097 22:11:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:52.097 22:11:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:52.097 22:11:46 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:52.097 22:11:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:52.097 22:11:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:52.097 22:11:46 -- nvmf/common.sh@294 -- # net_devs=() 00:12:52.097 22:11:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:52.097 22:11:46 -- nvmf/common.sh@295 -- # e810=() 00:12:52.097 22:11:46 -- nvmf/common.sh@295 -- # local -ga e810 00:12:52.097 22:11:46 -- nvmf/common.sh@296 -- # x722=() 00:12:52.097 22:11:46 -- nvmf/common.sh@296 -- # local -ga x722 00:12:52.097 22:11:46 -- nvmf/common.sh@297 -- # mlx=() 00:12:52.097 22:11:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:52.097 22:11:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.097 22:11:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.097 22:11:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.097 22:11:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.097 22:11:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.097 22:11:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.097 22:11:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.097 22:11:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.097 22:11:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.097 22:11:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.097 22:11:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.097 22:11:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:52.097 22:11:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:52.097 22:11:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:52.097 22:11:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:52.097 22:11:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:52.097 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:52.097 22:11:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:52.097 22:11:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:52.097 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:52.097 22:11:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:52.097 22:11:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:52.097 
22:11:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.097 22:11:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:52.097 22:11:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.097 22:11:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:52.097 Found net devices under 0000:86:00.0: cvl_0_0 00:12:52.097 22:11:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.097 22:11:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:52.097 22:11:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.097 22:11:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:52.097 22:11:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.097 22:11:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:52.097 Found net devices under 0000:86:00.1: cvl_0_1 00:12:52.097 22:11:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.097 22:11:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:52.097 22:11:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:52.097 22:11:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:52.097 22:11:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.097 22:11:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.097 22:11:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.097 22:11:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:52.097 22:11:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.097 22:11:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.097 22:11:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:52.097 22:11:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.097 22:11:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.097 22:11:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:52.097 22:11:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:52.097 22:11:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.097 22:11:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.097 22:11:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.097 22:11:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.097 22:11:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:52.097 22:11:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.097 22:11:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.097 22:11:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.097 22:11:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:52.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:12:52.097 00:12:52.097 --- 10.0.0.2 ping statistics --- 00:12:52.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.097 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:12:52.097 22:11:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.397 ms 00:12:52.097 00:12:52.097 --- 10.0.0.1 ping statistics --- 00:12:52.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.097 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:12:52.097 22:11:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.097 22:11:46 -- nvmf/common.sh@410 -- # return 0 00:12:52.097 22:11:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:52.097 22:11:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.097 22:11:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:52.097 22:11:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.097 22:11:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:52.097 22:11:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:52.097 22:11:46 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:52.097 22:11:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:52.097 22:11:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:52.097 22:11:46 -- common/autotest_common.sh@10 -- # set +x 00:12:52.097 22:11:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.097 22:11:46 -- nvmf/common.sh@469 -- # nvmfpid=3474995 00:12:52.097 22:11:46 -- nvmf/common.sh@470 -- # waitforlisten 3474995 00:12:52.097 22:11:46 -- common/autotest_common.sh@819 -- # '[' -z 3474995 ']' 00:12:52.097 22:11:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.097 22:11:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:52.097 22:11:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.097 22:11:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:52.097 22:11:46 -- common/autotest_common.sh@10 -- # set +x 00:12:52.097 [2024-07-24 22:11:46.869979] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:12:52.097 [2024-07-24 22:11:46.870019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.097 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.097 [2024-07-24 22:11:46.923472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.097 [2024-07-24 22:11:46.963802] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:52.097 [2024-07-24 22:11:46.963915] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.097 [2024-07-24 22:11:46.963923] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.097 [2024-07-24 22:11:46.963929] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
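The nvmf_tcp_init sequence traced above splits one NIC port into its own network namespace so a single host can act as both target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, in the default namespace), with an iptables rule opening the NVMe/TCP port and ping checks in both directions. A minimal sketch of that layout with placeholder interface and namespace names (eth_tgt/eth_init stand in for cvl_0_0/cvl_0_1; run as root, illustration only):

    ip netns add spdk_tgt_ns                        # target-side namespace (name assumed)
    ip link set eth_tgt netns spdk_tgt_ns           # move the target port into it
    ip addr add 10.0.0.1/24 dev eth_init            # initiator keeps the other port
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
    ip link set eth_init up
    ip netns exec spdk_tgt_ns ip link set eth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i eth_init -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # initiator -> target reachability
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1    # target -> initiator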
00:12:52.098 [2024-07-24 22:11:46.963964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.098 [2024-07-24 22:11:46.964072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.098 [2024-07-24 22:11:46.964113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.098 [2024-07-24 22:11:46.964115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.668 22:11:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:52.668 22:11:47 -- common/autotest_common.sh@852 -- # return 0 00:12:52.668 22:11:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:52.668 22:11:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:52.668 22:11:47 -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 22:11:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.668 22:11:47 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:52.668 22:11:47 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10186 00:12:52.928 [2024-07-24 22:11:47.893021] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:52.928 22:11:47 -- target/invalid.sh@40 -- # out='request: 00:12:52.928 { 00:12:52.928 "nqn": "nqn.2016-06.io.spdk:cnode10186", 00:12:52.928 "tgt_name": "foobar", 00:12:52.928 "method": "nvmf_create_subsystem", 00:12:52.928 "req_id": 1 00:12:52.928 } 00:12:52.928 Got JSON-RPC error response 00:12:52.928 response: 00:12:52.928 { 00:12:52.928 "code": -32603, 00:12:52.928 "message": "Unable to find target foobar" 00:12:52.928 }' 00:12:52.928 22:11:47 -- target/invalid.sh@41 -- # [[ request: 00:12:52.928 { 00:12:52.928 "nqn": "nqn.2016-06.io.spdk:cnode10186", 00:12:52.928 "tgt_name": "foobar", 00:12:52.928 "method": "nvmf_create_subsystem", 00:12:52.928 "req_id": 1 00:12:52.928 } 00:12:52.928 Got JSON-RPC error response 00:12:52.928 response: 00:12:52.928 { 00:12:52.928 "code": -32603, 00:12:52.928 "message": "Unable to find target foobar" 00:12:52.928 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:52.928 22:11:47 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:52.928 22:11:47 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4639 00:12:53.188 [2024-07-24 22:11:48.077675] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4639: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:53.188 22:11:48 -- target/invalid.sh@45 -- # out='request: 00:12:53.188 { 00:12:53.188 "nqn": "nqn.2016-06.io.spdk:cnode4639", 00:12:53.188 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:53.188 "method": "nvmf_create_subsystem", 00:12:53.188 "req_id": 1 00:12:53.189 } 00:12:53.189 Got JSON-RPC error response 00:12:53.189 response: 00:12:53.189 { 00:12:53.189 "code": -32602, 00:12:53.189 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:53.189 }' 00:12:53.189 22:11:48 -- target/invalid.sh@46 -- # [[ request: 00:12:53.189 { 00:12:53.189 "nqn": "nqn.2016-06.io.spdk:cnode4639", 00:12:53.189 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:53.189 "method": "nvmf_create_subsystem", 00:12:53.189 "req_id": 1 00:12:53.189 } 00:12:53.189 Got JSON-RPC error response 00:12:53.189 response: 00:12:53.189 { 
00:12:53.189 "code": -32602, 00:12:53.189 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:53.189 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:53.189 22:11:48 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:53.189 22:11:48 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode29798 00:12:53.189 [2024-07-24 22:11:48.258241] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29798: invalid model number 'SPDK_Controller' 00:12:53.189 22:11:48 -- target/invalid.sh@50 -- # out='request: 00:12:53.189 { 00:12:53.189 "nqn": "nqn.2016-06.io.spdk:cnode29798", 00:12:53.189 "model_number": "SPDK_Controller\u001f", 00:12:53.189 "method": "nvmf_create_subsystem", 00:12:53.189 "req_id": 1 00:12:53.189 } 00:12:53.189 Got JSON-RPC error response 00:12:53.189 response: 00:12:53.189 { 00:12:53.189 "code": -32602, 00:12:53.189 "message": "Invalid MN SPDK_Controller\u001f" 00:12:53.189 }' 00:12:53.189 22:11:48 -- target/invalid.sh@51 -- # [[ request: 00:12:53.189 { 00:12:53.189 "nqn": "nqn.2016-06.io.spdk:cnode29798", 00:12:53.189 "model_number": "SPDK_Controller\u001f", 00:12:53.189 "method": "nvmf_create_subsystem", 00:12:53.189 "req_id": 1 00:12:53.189 } 00:12:53.189 Got JSON-RPC error response 00:12:53.189 response: 00:12:53.189 { 00:12:53.189 "code": -32602, 00:12:53.189 "message": "Invalid MN SPDK_Controller\u001f" 00:12:53.189 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:53.189 22:11:48 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:53.189 22:11:48 -- target/invalid.sh@19 -- # local length=21 ll 00:12:53.189 22:11:48 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:53.189 22:11:48 -- target/invalid.sh@21 -- # local chars 00:12:53.189 22:11:48 -- target/invalid.sh@22 -- # local string 00:12:53.189 22:11:48 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:53.189 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.189 22:11:48 -- target/invalid.sh@25 -- # printf %x 118 00:12:53.189 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:53.189 22:11:48 -- target/invalid.sh@25 -- # string+=v 00:12:53.189 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.189 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.189 22:11:48 -- target/invalid.sh@25 -- # printf %x 71 00:12:53.189 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x47' 00:12:53.189 22:11:48 -- target/invalid.sh@25 -- # string+=G 00:12:53.189 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.189 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.189 22:11:48 -- target/invalid.sh@25 -- # printf %x 96 00:12:53.189 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:53.189 22:11:48 -- target/invalid.sh@25 -- # string+='`' 00:12:53.189 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.189 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.189 22:11:48 -- target/invalid.sh@25 -- # printf %x 117 00:12:53.189 22:11:48 -- 
target/invalid.sh@25 -- # echo -e '\x75' 00:12:53.189 22:11:48 -- target/invalid.sh@25 -- # string+=u 00:12:53.189 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.189 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.189 22:11:48 -- target/invalid.sh@25 -- # printf %x 99 00:12:53.189 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # string+=c 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # printf %x 61 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # string+== 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # printf %x 97 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # string+=a 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # printf %x 99 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # string+=c 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # printf %x 119 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # string+=w 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # printf %x 104 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # string+=h 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # printf %x 116 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # string+=t 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # printf %x 102 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # string+=f 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # printf %x 121 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # string+=y 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # printf %x 123 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # string+='{' 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.449 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.449 22:11:48 -- target/invalid.sh@25 -- # printf %x 83 00:12:53.450 22:11:48 -- 
target/invalid.sh@25 -- # echo -e '\x53' 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # string+=S 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # printf %x 101 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # string+=e 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # printf %x 68 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # string+=D 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # printf %x 95 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # string+=_ 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # printf %x 87 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # string+=W 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # printf %x 73 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # string+=I 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # printf %x 40 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:53.450 22:11:48 -- target/invalid.sh@25 -- # string+='(' 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.450 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.450 22:11:48 -- target/invalid.sh@28 -- # [[ v == \- ]] 00:12:53.450 22:11:48 -- target/invalid.sh@31 -- # echo 'vG`uc=acwhtfy{SeD_WI(' 00:12:53.450 22:11:48 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'vG`uc=acwhtfy{SeD_WI(' nqn.2016-06.io.spdk:cnode4843 00:12:53.450 [2024-07-24 22:11:48.567275] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4843: invalid serial number 'vG`uc=acwhtfy{SeD_WI(' 00:12:53.710 22:11:48 -- target/invalid.sh@54 -- # out='request: 00:12:53.710 { 00:12:53.710 "nqn": "nqn.2016-06.io.spdk:cnode4843", 00:12:53.710 "serial_number": "vG`uc=acwhtfy{SeD_WI(", 00:12:53.710 "method": "nvmf_create_subsystem", 00:12:53.710 "req_id": 1 00:12:53.710 } 00:12:53.710 Got JSON-RPC error response 00:12:53.710 response: 00:12:53.710 { 00:12:53.710 "code": -32602, 00:12:53.710 "message": "Invalid SN vG`uc=acwhtfy{SeD_WI(" 00:12:53.711 }' 00:12:53.711 22:11:48 -- target/invalid.sh@55 -- # [[ request: 00:12:53.711 { 00:12:53.711 "nqn": "nqn.2016-06.io.spdk:cnode4843", 00:12:53.711 "serial_number": "vG`uc=acwhtfy{SeD_WI(", 00:12:53.711 "method": "nvmf_create_subsystem", 00:12:53.711 "req_id": 1 00:12:53.711 } 00:12:53.711 Got JSON-RPC error response 00:12:53.711 response: 00:12:53.711 { 00:12:53.711 "code": -32602, 00:12:53.711 "message": 
"Invalid SN vG`uc=acwhtfy{SeD_WI(" 00:12:53.711 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:53.711 22:11:48 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:53.711 22:11:48 -- target/invalid.sh@19 -- # local length=41 ll 00:12:53.711 22:11:48 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:53.711 22:11:48 -- target/invalid.sh@21 -- # local chars 00:12:53.711 22:11:48 -- target/invalid.sh@22 -- # local string 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 86 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=V 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 39 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=\' 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 106 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=j 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 45 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=- 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 48 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=0 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 82 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=R 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 80 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=P 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 84 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=T 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 46 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=. 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 34 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+='"' 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 92 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+='\' 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 102 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=f 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 105 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x69' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=i 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 116 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=t 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 65 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=A 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 86 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=V 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 52 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=4 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 54 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=6 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 80 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=P 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 47 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=/ 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 100 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=d 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 62 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+='>' 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 98 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+=b 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 62 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+='>' 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 60 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+='<' 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # printf %x 34 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:53.711 22:11:48 -- target/invalid.sh@25 -- # string+='"' 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.711 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 95 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # string+=_ 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 99 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # string+=c 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 38 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # string+='&' 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 101 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # string+=e 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 85 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # string+=U 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 52 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # string+=4 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 96 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # string+='`' 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 89 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # string+=Y 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 34 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # string+='"' 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 101 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # string+=e 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 34 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # string+='"' 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 65 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # string+=A 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 94 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # string+='^' 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.712 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.712 22:11:48 -- target/invalid.sh@25 -- # printf %x 43 00:12:53.972 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:53.972 22:11:48 -- target/invalid.sh@25 -- # string+=+ 00:12:53.972 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:53.972 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.972 22:11:48 -- target/invalid.sh@25 -- # printf %x 79 00:12:53.972 22:11:48 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:53.972 22:11:48 -- target/invalid.sh@25 -- # string+=O 00:12:53.972 22:11:48 -- target/invalid.sh@24 -- # (( ll++ )) 
00:12:53.972 22:11:48 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:53.972 22:11:48 -- target/invalid.sh@28 -- # [[ V == \- ]] 00:12:53.972 22:11:48 -- target/invalid.sh@31 -- # echo 'V'\''j-0RPT."\fitAV46P/d>b><"_c&eU4`Y"e"A^+O' 00:12:53.972 22:11:48 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'V'\''j-0RPT."\fitAV46P/d>b><"_c&eU4`Y"e"A^+O' nqn.2016-06.io.spdk:cnode26415 00:12:53.972 [2024-07-24 22:11:49.012776] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26415: invalid model number 'V'j-0RPT."\fitAV46P/d>b><"_c&eU4`Y"e"A^+O' 00:12:53.972 22:11:49 -- target/invalid.sh@58 -- # out='request: 00:12:53.972 { 00:12:53.972 "nqn": "nqn.2016-06.io.spdk:cnode26415", 00:12:53.972 "model_number": "V'\''j-0RPT.\"\\fitAV46P/d>b><\"_c&eU4`Y\"e\"A^+O", 00:12:53.972 "method": "nvmf_create_subsystem", 00:12:53.972 "req_id": 1 00:12:53.972 } 00:12:53.972 Got JSON-RPC error response 00:12:53.972 response: 00:12:53.972 { 00:12:53.972 "code": -32602, 00:12:53.972 "message": "Invalid MN V'\''j-0RPT.\"\\fitAV46P/d>b><\"_c&eU4`Y\"e\"A^+O" 00:12:53.972 }' 00:12:53.972 22:11:49 -- target/invalid.sh@59 -- # [[ request: 00:12:53.972 { 00:12:53.972 "nqn": "nqn.2016-06.io.spdk:cnode26415", 00:12:53.972 "model_number": "V'j-0RPT.\"\\fitAV46P/d>b><\"_c&eU4`Y\"e\"A^+O", 00:12:53.972 "method": "nvmf_create_subsystem", 00:12:53.972 "req_id": 1 00:12:53.972 } 00:12:53.972 Got JSON-RPC error response 00:12:53.972 response: 00:12:53.972 { 00:12:53.972 "code": -32602, 00:12:53.972 "message": "Invalid MN V'j-0RPT.\"\\fitAV46P/d>b><\"_c&eU4`Y\"e\"A^+O" 00:12:53.972 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:53.972 22:11:49 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:54.233 [2024-07-24 22:11:49.201502] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.233 22:11:49 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:54.493 22:11:49 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:54.493 22:11:49 -- target/invalid.sh@67 -- # echo '' 00:12:54.493 22:11:49 -- target/invalid.sh@67 -- # head -n 1 00:12:54.493 22:11:49 -- target/invalid.sh@67 -- # IP= 00:12:54.493 22:11:49 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:54.493 [2024-07-24 22:11:49.566787] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:54.493 22:11:49 -- target/invalid.sh@69 -- # out='request: 00:12:54.493 { 00:12:54.493 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:54.493 "listen_address": { 00:12:54.493 "trtype": "tcp", 00:12:54.493 "traddr": "", 00:12:54.493 "trsvcid": "4421" 00:12:54.493 }, 00:12:54.493 "method": "nvmf_subsystem_remove_listener", 00:12:54.493 "req_id": 1 00:12:54.493 } 00:12:54.493 Got JSON-RPC error response 00:12:54.493 response: 00:12:54.493 { 00:12:54.493 "code": -32602, 00:12:54.493 "message": "Invalid parameters" 00:12:54.493 }' 00:12:54.493 22:11:49 -- target/invalid.sh@70 -- # [[ request: 00:12:54.493 { 00:12:54.493 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:54.493 "listen_address": { 00:12:54.493 "trtype": "tcp", 00:12:54.493 "traddr": "", 00:12:54.493 "trsvcid": "4421" 00:12:54.493 }, 00:12:54.493 "method": 
"nvmf_subsystem_remove_listener", 00:12:54.493 "req_id": 1 00:12:54.493 } 00:12:54.493 Got JSON-RPC error response 00:12:54.493 response: 00:12:54.493 { 00:12:54.493 "code": -32602, 00:12:54.493 "message": "Invalid parameters" 00:12:54.493 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:54.493 22:11:49 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23428 -i 0 00:12:54.753 [2024-07-24 22:11:49.743352] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23428: invalid cntlid range [0-65519] 00:12:54.753 22:11:49 -- target/invalid.sh@73 -- # out='request: 00:12:54.753 { 00:12:54.753 "nqn": "nqn.2016-06.io.spdk:cnode23428", 00:12:54.753 "min_cntlid": 0, 00:12:54.753 "method": "nvmf_create_subsystem", 00:12:54.753 "req_id": 1 00:12:54.753 } 00:12:54.753 Got JSON-RPC error response 00:12:54.753 response: 00:12:54.753 { 00:12:54.753 "code": -32602, 00:12:54.753 "message": "Invalid cntlid range [0-65519]" 00:12:54.753 }' 00:12:54.753 22:11:49 -- target/invalid.sh@74 -- # [[ request: 00:12:54.753 { 00:12:54.753 "nqn": "nqn.2016-06.io.spdk:cnode23428", 00:12:54.753 "min_cntlid": 0, 00:12:54.753 "method": "nvmf_create_subsystem", 00:12:54.753 "req_id": 1 00:12:54.753 } 00:12:54.753 Got JSON-RPC error response 00:12:54.753 response: 00:12:54.753 { 00:12:54.753 "code": -32602, 00:12:54.753 "message": "Invalid cntlid range [0-65519]" 00:12:54.753 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:54.753 22:11:49 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7051 -i 65520 00:12:55.014 [2024-07-24 22:11:49.927984] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7051: invalid cntlid range [65520-65519] 00:12:55.014 22:11:49 -- target/invalid.sh@75 -- # out='request: 00:12:55.014 { 00:12:55.014 "nqn": "nqn.2016-06.io.spdk:cnode7051", 00:12:55.014 "min_cntlid": 65520, 00:12:55.014 "method": "nvmf_create_subsystem", 00:12:55.014 "req_id": 1 00:12:55.014 } 00:12:55.014 Got JSON-RPC error response 00:12:55.014 response: 00:12:55.014 { 00:12:55.014 "code": -32602, 00:12:55.014 "message": "Invalid cntlid range [65520-65519]" 00:12:55.014 }' 00:12:55.014 22:11:49 -- target/invalid.sh@76 -- # [[ request: 00:12:55.014 { 00:12:55.014 "nqn": "nqn.2016-06.io.spdk:cnode7051", 00:12:55.014 "min_cntlid": 65520, 00:12:55.014 "method": "nvmf_create_subsystem", 00:12:55.014 "req_id": 1 00:12:55.014 } 00:12:55.014 Got JSON-RPC error response 00:12:55.014 response: 00:12:55.014 { 00:12:55.014 "code": -32602, 00:12:55.014 "message": "Invalid cntlid range [65520-65519]" 00:12:55.014 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:55.014 22:11:49 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4705 -I 0 00:12:55.014 [2024-07-24 22:11:50.112700] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4705: invalid cntlid range [1-0] 00:12:55.014 22:11:50 -- target/invalid.sh@77 -- # out='request: 00:12:55.014 { 00:12:55.014 "nqn": "nqn.2016-06.io.spdk:cnode4705", 00:12:55.014 "max_cntlid": 0, 00:12:55.014 "method": "nvmf_create_subsystem", 00:12:55.014 "req_id": 1 00:12:55.014 } 00:12:55.014 Got JSON-RPC error response 00:12:55.014 response: 00:12:55.014 { 00:12:55.014 "code": -32602, 00:12:55.014 "message": "Invalid 
cntlid range [1-0]" 00:12:55.014 }' 00:12:55.014 22:11:50 -- target/invalid.sh@78 -- # [[ request: 00:12:55.014 { 00:12:55.014 "nqn": "nqn.2016-06.io.spdk:cnode4705", 00:12:55.014 "max_cntlid": 0, 00:12:55.014 "method": "nvmf_create_subsystem", 00:12:55.014 "req_id": 1 00:12:55.014 } 00:12:55.014 Got JSON-RPC error response 00:12:55.014 response: 00:12:55.014 { 00:12:55.014 "code": -32602, 00:12:55.014 "message": "Invalid cntlid range [1-0]" 00:12:55.014 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:55.014 22:11:50 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24094 -I 65520 00:12:55.274 [2024-07-24 22:11:50.297292] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24094: invalid cntlid range [1-65520] 00:12:55.274 22:11:50 -- target/invalid.sh@79 -- # out='request: 00:12:55.274 { 00:12:55.274 "nqn": "nqn.2016-06.io.spdk:cnode24094", 00:12:55.274 "max_cntlid": 65520, 00:12:55.274 "method": "nvmf_create_subsystem", 00:12:55.274 "req_id": 1 00:12:55.274 } 00:12:55.274 Got JSON-RPC error response 00:12:55.274 response: 00:12:55.274 { 00:12:55.274 "code": -32602, 00:12:55.274 "message": "Invalid cntlid range [1-65520]" 00:12:55.274 }' 00:12:55.274 22:11:50 -- target/invalid.sh@80 -- # [[ request: 00:12:55.274 { 00:12:55.274 "nqn": "nqn.2016-06.io.spdk:cnode24094", 00:12:55.274 "max_cntlid": 65520, 00:12:55.274 "method": "nvmf_create_subsystem", 00:12:55.274 "req_id": 1 00:12:55.274 } 00:12:55.274 Got JSON-RPC error response 00:12:55.274 response: 00:12:55.274 { 00:12:55.274 "code": -32602, 00:12:55.274 "message": "Invalid cntlid range [1-65520]" 00:12:55.274 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:55.274 22:11:50 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26851 -i 6 -I 5 00:12:55.534 [2024-07-24 22:11:50.489974] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26851: invalid cntlid range [6-5] 00:12:55.534 22:11:50 -- target/invalid.sh@83 -- # out='request: 00:12:55.534 { 00:12:55.534 "nqn": "nqn.2016-06.io.spdk:cnode26851", 00:12:55.534 "min_cntlid": 6, 00:12:55.534 "max_cntlid": 5, 00:12:55.534 "method": "nvmf_create_subsystem", 00:12:55.534 "req_id": 1 00:12:55.534 } 00:12:55.534 Got JSON-RPC error response 00:12:55.534 response: 00:12:55.534 { 00:12:55.534 "code": -32602, 00:12:55.534 "message": "Invalid cntlid range [6-5]" 00:12:55.534 }' 00:12:55.534 22:11:50 -- target/invalid.sh@84 -- # [[ request: 00:12:55.534 { 00:12:55.534 "nqn": "nqn.2016-06.io.spdk:cnode26851", 00:12:55.534 "min_cntlid": 6, 00:12:55.534 "max_cntlid": 5, 00:12:55.534 "method": "nvmf_create_subsystem", 00:12:55.534 "req_id": 1 00:12:55.534 } 00:12:55.535 Got JSON-RPC error response 00:12:55.535 response: 00:12:55.535 { 00:12:55.535 "code": -32602, 00:12:55.535 "message": "Invalid cntlid range [6-5]" 00:12:55.535 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:55.535 22:11:50 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:55.535 22:11:50 -- target/invalid.sh@87 -- # out='request: 00:12:55.535 { 00:12:55.535 "name": "foobar", 00:12:55.535 "method": "nvmf_delete_target", 00:12:55.535 "req_id": 1 00:12:55.535 } 00:12:55.535 Got JSON-RPC error response 00:12:55.535 response: 00:12:55.535 { 00:12:55.535 "code": -32602, 
00:12:55.535 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:55.535 }' 00:12:55.535 22:11:50 -- target/invalid.sh@88 -- # [[ request: 00:12:55.535 { 00:12:55.535 "name": "foobar", 00:12:55.535 "method": "nvmf_delete_target", 00:12:55.535 "req_id": 1 00:12:55.535 } 00:12:55.535 Got JSON-RPC error response 00:12:55.535 response: 00:12:55.535 { 00:12:55.535 "code": -32602, 00:12:55.535 "message": "The specified target doesn't exist, cannot delete it." 00:12:55.535 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:55.535 22:11:50 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:55.535 22:11:50 -- target/invalid.sh@91 -- # nvmftestfini 00:12:55.535 22:11:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:55.535 22:11:50 -- nvmf/common.sh@116 -- # sync 00:12:55.535 22:11:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:55.535 22:11:50 -- nvmf/common.sh@119 -- # set +e 00:12:55.535 22:11:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:55.535 22:11:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:55.535 rmmod nvme_tcp 00:12:55.535 rmmod nvme_fabrics 00:12:55.795 rmmod nvme_keyring 00:12:55.795 22:11:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:55.795 22:11:50 -- nvmf/common.sh@123 -- # set -e 00:12:55.795 22:11:50 -- nvmf/common.sh@124 -- # return 0 00:12:55.795 22:11:50 -- nvmf/common.sh@477 -- # '[' -n 3474995 ']' 00:12:55.795 22:11:50 -- nvmf/common.sh@478 -- # killprocess 3474995 00:12:55.795 22:11:50 -- common/autotest_common.sh@926 -- # '[' -z 3474995 ']' 00:12:55.795 22:11:50 -- common/autotest_common.sh@930 -- # kill -0 3474995 00:12:55.795 22:11:50 -- common/autotest_common.sh@931 -- # uname 00:12:55.795 22:11:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:55.795 22:11:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3474995 00:12:55.795 22:11:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:55.795 22:11:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:55.795 22:11:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3474995' 00:12:55.795 killing process with pid 3474995 00:12:55.795 22:11:50 -- common/autotest_common.sh@945 -- # kill 3474995 00:12:55.795 22:11:50 -- common/autotest_common.sh@950 -- # wait 3474995 00:12:55.795 22:11:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:55.795 22:11:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:55.795 22:11:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:55.795 22:11:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.795 22:11:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:55.795 22:11:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.795 22:11:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.795 22:11:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.339 22:11:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:58.339 00:12:58.339 real 0m11.530s 00:12:58.339 user 0m19.408s 00:12:58.339 sys 0m4.912s 00:12:58.339 22:11:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:58.339 22:11:52 -- common/autotest_common.sh@10 -- # set +x 00:12:58.339 ************************************ 00:12:58.339 END TEST nvmf_invalid 00:12:58.339 ************************************ 00:12:58.339 22:11:53 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:58.339 22:11:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:58.339 22:11:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:58.339 22:11:53 -- common/autotest_common.sh@10 -- # set +x 00:12:58.339 ************************************ 00:12:58.339 START TEST nvmf_abort 00:12:58.339 ************************************ 00:12:58.339 22:11:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:58.339 * Looking for test storage... 00:12:58.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.339 22:11:53 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.339 22:11:53 -- nvmf/common.sh@7 -- # uname -s 00:12:58.339 22:11:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.339 22:11:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.339 22:11:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.339 22:11:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.339 22:11:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.339 22:11:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.339 22:11:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.339 22:11:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.339 22:11:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.339 22:11:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.339 22:11:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:58.339 22:11:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:58.339 22:11:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.339 22:11:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.339 22:11:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.339 22:11:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.339 22:11:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.339 22:11:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.339 22:11:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.340 22:11:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.340 22:11:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.340 22:11:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.340 22:11:53 -- paths/export.sh@5 -- # export PATH 00:12:58.340 22:11:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.340 22:11:53 -- nvmf/common.sh@46 -- # : 0 00:12:58.340 22:11:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:58.340 22:11:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:58.340 22:11:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:58.340 22:11:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.340 22:11:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.340 22:11:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:58.340 22:11:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:58.340 22:11:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:58.340 22:11:53 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:58.340 22:11:53 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:58.340 22:11:53 -- target/abort.sh@14 -- # nvmftestinit 00:12:58.340 22:11:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:58.340 22:11:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.340 22:11:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:58.340 22:11:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:58.340 22:11:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:58.340 22:11:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.340 22:11:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.340 22:11:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.340 22:11:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:58.340 22:11:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:58.340 22:11:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:58.340 22:11:53 -- common/autotest_common.sh@10 -- # set +x 00:13:03.625 22:11:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:13:03.625 22:11:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:03.625 22:11:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:03.625 22:11:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:03.625 22:11:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:03.626 22:11:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:03.626 22:11:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:03.626 22:11:58 -- nvmf/common.sh@294 -- # net_devs=() 00:13:03.626 22:11:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:03.626 22:11:58 -- nvmf/common.sh@295 -- # e810=() 00:13:03.626 22:11:58 -- nvmf/common.sh@295 -- # local -ga e810 00:13:03.626 22:11:58 -- nvmf/common.sh@296 -- # x722=() 00:13:03.626 22:11:58 -- nvmf/common.sh@296 -- # local -ga x722 00:13:03.626 22:11:58 -- nvmf/common.sh@297 -- # mlx=() 00:13:03.626 22:11:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:03.626 22:11:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.626 22:11:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.626 22:11:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.626 22:11:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.626 22:11:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.626 22:11:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.626 22:11:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.626 22:11:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.626 22:11:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.626 22:11:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.626 22:11:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.626 22:11:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:03.626 22:11:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:03.626 22:11:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:03.626 22:11:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:03.626 22:11:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:03.626 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:03.626 22:11:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:03.626 22:11:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:03.626 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:03.626 22:11:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
00:13:03.626 22:11:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:03.626 22:11:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.626 22:11:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:03.626 22:11:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.626 22:11:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:03.626 Found net devices under 0000:86:00.0: cvl_0_0 00:13:03.626 22:11:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.626 22:11:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:03.626 22:11:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.626 22:11:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:03.626 22:11:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.626 22:11:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:03.626 Found net devices under 0000:86:00.1: cvl_0_1 00:13:03.626 22:11:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.626 22:11:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:03.626 22:11:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:03.626 22:11:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:03.626 22:11:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.626 22:11:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.626 22:11:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.626 22:11:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:03.626 22:11:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.626 22:11:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.626 22:11:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:03.626 22:11:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.626 22:11:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.626 22:11:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:03.626 22:11:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:03.626 22:11:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.626 22:11:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.626 22:11:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.626 22:11:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.626 22:11:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:03.626 22:11:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.626 22:11:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.626 22:11:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.626 22:11:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:03.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:03.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:13:03.626 00:13:03.626 --- 10.0.0.2 ping statistics --- 00:13:03.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.626 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:13:03.626 22:11:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.445 ms 00:13:03.626 00:13:03.626 --- 10.0.0.1 ping statistics --- 00:13:03.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.626 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:13:03.626 22:11:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.626 22:11:58 -- nvmf/common.sh@410 -- # return 0 00:13:03.626 22:11:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:03.626 22:11:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.626 22:11:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:03.626 22:11:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.626 22:11:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:03.626 22:11:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:03.626 22:11:58 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:03.626 22:11:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:03.626 22:11:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:03.626 22:11:58 -- common/autotest_common.sh@10 -- # set +x 00:13:03.626 22:11:58 -- nvmf/common.sh@469 -- # nvmfpid=3479219 00:13:03.626 22:11:58 -- nvmf/common.sh@470 -- # waitforlisten 3479219 00:13:03.626 22:11:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:03.626 22:11:58 -- common/autotest_common.sh@819 -- # '[' -z 3479219 ']' 00:13:03.626 22:11:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.626 22:11:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:03.626 22:11:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.626 22:11:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:03.626 22:11:58 -- common/autotest_common.sh@10 -- # set +x 00:13:03.626 [2024-07-24 22:11:58.647107] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:13:03.626 [2024-07-24 22:11:58.647151] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.626 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.626 [2024-07-24 22:11:58.706335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:03.626 [2024-07-24 22:11:58.746515] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:03.626 [2024-07-24 22:11:58.746656] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.626 [2024-07-24 22:11:58.746665] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:03.626 [2024-07-24 22:11:58.746672] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.626 [2024-07-24 22:11:58.746739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.626 [2024-07-24 22:11:58.746776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.626 [2024-07-24 22:11:58.746778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.567 22:11:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:04.567 22:11:59 -- common/autotest_common.sh@852 -- # return 0 00:13:04.567 22:11:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:04.567 22:11:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:04.567 22:11:59 -- common/autotest_common.sh@10 -- # set +x 00:13:04.567 22:11:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.567 22:11:59 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:04.567 22:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.567 22:11:59 -- common/autotest_common.sh@10 -- # set +x 00:13:04.567 [2024-07-24 22:11:59.490066] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.567 22:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.567 22:11:59 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:04.567 22:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.567 22:11:59 -- common/autotest_common.sh@10 -- # set +x 00:13:04.567 Malloc0 00:13:04.567 22:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.567 22:11:59 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:04.567 22:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.567 22:11:59 -- common/autotest_common.sh@10 -- # set +x 00:13:04.567 Delay0 00:13:04.567 22:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.567 22:11:59 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:04.567 22:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.567 22:11:59 -- common/autotest_common.sh@10 -- # set +x 00:13:04.567 22:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.567 22:11:59 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:04.567 22:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.567 22:11:59 -- common/autotest_common.sh@10 -- # set +x 00:13:04.567 22:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.567 22:11:59 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:04.567 22:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.567 22:11:59 -- common/autotest_common.sh@10 -- # set +x 00:13:04.567 [2024-07-24 22:11:59.568951] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.567 22:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.567 22:11:59 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:04.567 22:11:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.567 22:11:59 -- common/autotest_common.sh@10 -- # set +x 00:13:04.567 22:11:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:13:04.567 22:11:59 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:04.567 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.567 [2024-07-24 22:11:59.637096] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:07.116 Initializing NVMe Controllers 00:13:07.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:07.116 controller IO queue size 128 less than required 00:13:07.116 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:07.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:07.116 Initialization complete. Launching workers. 00:13:07.116 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 112, failed: 42007 00:13:07.116 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42057, failed to submit 62 00:13:07.116 success 42007, unsuccess 50, failed 0 00:13:07.116 22:12:01 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:07.116 22:12:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.116 22:12:01 -- common/autotest_common.sh@10 -- # set +x 00:13:07.116 22:12:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.116 22:12:01 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:07.116 22:12:01 -- target/abort.sh@38 -- # nvmftestfini 00:13:07.116 22:12:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:07.116 22:12:01 -- nvmf/common.sh@116 -- # sync 00:13:07.116 22:12:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:07.116 22:12:01 -- nvmf/common.sh@119 -- # set +e 00:13:07.116 22:12:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:07.116 22:12:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:07.116 rmmod nvme_tcp 00:13:07.116 rmmod nvme_fabrics 00:13:07.116 rmmod nvme_keyring 00:13:07.116 22:12:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:07.116 22:12:01 -- nvmf/common.sh@123 -- # set -e 00:13:07.116 22:12:01 -- nvmf/common.sh@124 -- # return 0 00:13:07.116 22:12:01 -- nvmf/common.sh@477 -- # '[' -n 3479219 ']' 00:13:07.116 22:12:01 -- nvmf/common.sh@478 -- # killprocess 3479219 00:13:07.116 22:12:01 -- common/autotest_common.sh@926 -- # '[' -z 3479219 ']' 00:13:07.116 22:12:01 -- common/autotest_common.sh@930 -- # kill -0 3479219 00:13:07.116 22:12:01 -- common/autotest_common.sh@931 -- # uname 00:13:07.116 22:12:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:07.116 22:12:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3479219 00:13:07.116 22:12:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:07.116 22:12:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:07.116 22:12:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3479219' 00:13:07.116 killing process with pid 3479219 00:13:07.116 22:12:01 -- common/autotest_common.sh@945 -- # kill 3479219 00:13:07.116 22:12:01 -- common/autotest_common.sh@950 -- # wait 3479219 00:13:07.116 22:12:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:07.116 22:12:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:07.116 22:12:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:07.116 22:12:02 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.116 22:12:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:07.116 22:12:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.116 22:12:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.116 22:12:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.022 22:12:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:09.022 00:13:09.022 real 0m11.074s 00:13:09.022 user 0m12.842s 00:13:09.022 sys 0m5.126s 00:13:09.022 22:12:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.022 22:12:04 -- common/autotest_common.sh@10 -- # set +x 00:13:09.022 ************************************ 00:13:09.022 END TEST nvmf_abort 00:13:09.022 ************************************ 00:13:09.022 22:12:04 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:09.022 22:12:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:09.022 22:12:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:09.022 22:12:04 -- common/autotest_common.sh@10 -- # set +x 00:13:09.022 ************************************ 00:13:09.022 START TEST nvmf_ns_hotplug_stress 00:13:09.022 ************************************ 00:13:09.022 22:12:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:09.281 * Looking for test storage... 00:13:09.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.281 22:12:04 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.281 22:12:04 -- nvmf/common.sh@7 -- # uname -s 00:13:09.281 22:12:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.282 22:12:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.282 22:12:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.282 22:12:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.282 22:12:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.282 22:12:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.282 22:12:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.282 22:12:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.282 22:12:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.282 22:12:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.282 22:12:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:09.282 22:12:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:09.282 22:12:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.282 22:12:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.282 22:12:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.282 22:12:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.282 22:12:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.282 22:12:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.282 22:12:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.282 22:12:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.282 22:12:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.282 22:12:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.282 22:12:04 -- paths/export.sh@5 -- # export PATH 00:13:09.282 22:12:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.282 22:12:04 -- nvmf/common.sh@46 -- # : 0 00:13:09.282 22:12:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:09.282 22:12:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:09.282 22:12:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:09.282 22:12:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.282 22:12:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.282 22:12:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:09.282 22:12:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:09.282 22:12:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:09.282 22:12:04 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:09.282 22:12:04 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:09.282 22:12:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:09.282 22:12:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.282 22:12:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:09.282 22:12:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:09.282 22:12:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:09.282 22:12:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:09.282 22:12:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.282 22:12:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.282 22:12:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:09.282 22:12:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:09.282 22:12:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:09.282 22:12:04 -- common/autotest_common.sh@10 -- # set +x 00:13:14.628 22:12:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:14.628 22:12:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:14.628 22:12:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:14.628 22:12:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:14.628 22:12:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:14.628 22:12:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:14.628 22:12:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:14.628 22:12:09 -- nvmf/common.sh@294 -- # net_devs=() 00:13:14.628 22:12:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:14.628 22:12:09 -- nvmf/common.sh@295 -- # e810=() 00:13:14.628 22:12:09 -- nvmf/common.sh@295 -- # local -ga e810 00:13:14.628 22:12:09 -- nvmf/common.sh@296 -- # x722=() 00:13:14.628 22:12:09 -- nvmf/common.sh@296 -- # local -ga x722 00:13:14.628 22:12:09 -- nvmf/common.sh@297 -- # mlx=() 00:13:14.628 22:12:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:14.628 22:12:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.628 22:12:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.628 22:12:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.628 22:12:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.628 22:12:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.628 22:12:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.628 22:12:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.628 22:12:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.628 22:12:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.628 22:12:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.628 22:12:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.628 22:12:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:14.628 22:12:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:14.628 22:12:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:14.628 22:12:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:14.628 22:12:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:14.628 22:12:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:14.628 22:12:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:14.628 22:12:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:14.628 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:14.628 22:12:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:14.628 22:12:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:14.628 22:12:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.628 22:12:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.628 22:12:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:14.628 22:12:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:14.628 22:12:09 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:14.629 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:14.629 22:12:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:14.629 22:12:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:14.629 22:12:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:14.629 22:12:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:14.629 22:12:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:14.629 22:12:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:14.629 22:12:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:14.629 22:12:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:14.629 22:12:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:14.629 22:12:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.629 22:12:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:14.629 22:12:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.629 22:12:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:14.629 Found net devices under 0000:86:00.0: cvl_0_0 00:13:14.629 22:12:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.629 22:12:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:14.629 22:12:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.629 22:12:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:14.629 22:12:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.629 22:12:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:14.629 Found net devices under 0000:86:00.1: cvl_0_1 00:13:14.629 22:12:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.629 22:12:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:14.629 22:12:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:14.629 22:12:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:14.629 22:12:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:14.629 22:12:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:14.629 22:12:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:14.629 22:12:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:14.629 22:12:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:14.629 22:12:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:14.629 22:12:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:14.629 22:12:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:14.629 22:12:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:14.629 22:12:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:14.629 22:12:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:14.629 22:12:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:14.629 22:12:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:14.629 22:12:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:14.629 22:12:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:14.891 22:12:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:14.891 22:12:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:14.891 22:12:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:14.891 22:12:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
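For readers following the xtrace above: the nvmf_tcp_init block boils down to a two-interface split, with the target NIC isolated in its own network namespace and the initiator NIC left in the root namespace. A condensed recap (interface names, namespace name and addresses are exactly the ones printed in the trace; this is a summary for orientation, not additional captured output):

    # target side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # initiator side: cvl_0_1 stays in the root namespace with 10.0.0.1/24
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

The loopback bring-up, the iptables ACCEPT rule for TCP port 4420, and the two ping checks that follow verify this path in both directions before nvmf_tgt is started inside cvl_0_0_ns_spdk.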
00:13:14.891 22:12:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:14.891 22:12:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:14.891 22:12:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:14.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:14.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:13:14.891 00:13:14.891 --- 10.0.0.2 ping statistics --- 00:13:14.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.891 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:13:14.891 22:12:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:14.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:14.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:13:14.891 00:13:14.891 --- 10.0.0.1 ping statistics --- 00:13:14.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:14.891 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:13:14.891 22:12:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:14.891 22:12:09 -- nvmf/common.sh@410 -- # return 0 00:13:14.891 22:12:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:14.891 22:12:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:14.891 22:12:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:14.891 22:12:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:14.891 22:12:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:14.891 22:12:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:14.891 22:12:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:14.891 22:12:09 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:14.891 22:12:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:14.891 22:12:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:14.891 22:12:09 -- common/autotest_common.sh@10 -- # set +x 00:13:14.891 22:12:09 -- nvmf/common.sh@469 -- # nvmfpid=3483832 00:13:14.891 22:12:09 -- nvmf/common.sh@470 -- # waitforlisten 3483832 00:13:14.891 22:12:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:14.892 22:12:09 -- common/autotest_common.sh@819 -- # '[' -z 3483832 ']' 00:13:14.892 22:12:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.892 22:12:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:14.892 22:12:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.892 22:12:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:14.892 22:12:09 -- common/autotest_common.sh@10 -- # set +x 00:13:15.152 [2024-07-24 22:12:10.027200] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:13:15.152 [2024-07-24 22:12:10.027257] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.152 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.152 [2024-07-24 22:12:10.089024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:15.152 [2024-07-24 22:12:10.127366] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:15.152 [2024-07-24 22:12:10.127502] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.152 [2024-07-24 22:12:10.127511] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.152 [2024-07-24 22:12:10.127517] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.152 [2024-07-24 22:12:10.127638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.152 [2024-07-24 22:12:10.127727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.152 [2024-07-24 22:12:10.127727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.722 22:12:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:15.722 22:12:10 -- common/autotest_common.sh@852 -- # return 0 00:13:15.722 22:12:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:15.722 22:12:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:15.722 22:12:10 -- common/autotest_common.sh@10 -- # set +x 00:13:15.982 22:12:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.982 22:12:10 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:15.982 22:12:10 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:15.982 [2024-07-24 22:12:11.023312] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.982 22:12:11 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:16.242 22:12:11 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.502 [2024-07-24 22:12:11.384694] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.502 22:12:11 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:16.502 22:12:11 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:16.762 Malloc0 00:13:16.762 22:12:11 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:17.021 Delay0 00:13:17.021 22:12:11 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.021 22:12:12 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:17.281 NULL1 00:13:17.281 22:12:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:17.540 22:12:12 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:17.540 22:12:12 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3484252 00:13:17.540 22:12:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:17.541 22:12:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.541 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.541 22:12:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.800 22:12:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:17.800 22:12:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:18.060 true 00:13:18.060 22:12:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:18.060 22:12:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.060 22:12:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.320 22:12:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:18.320 22:12:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:18.578 true 00:13:18.578 22:12:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:18.578 22:12:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.837 22:12:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.837 22:12:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:18.837 22:12:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:19.096 true 00:13:19.096 22:12:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:19.096 22:12:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.356 22:12:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.356 22:12:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:19.356 22:12:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:19.616 true 00:13:19.616 22:12:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:19.616 22:12:14 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.876 22:12:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.135 22:12:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:20.135 22:12:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:20.135 true 00:13:20.135 22:12:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:20.135 22:12:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.394 22:12:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.656 22:12:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:20.656 22:12:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:20.656 true 00:13:20.656 22:12:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:20.656 22:12:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.917 22:12:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.177 22:12:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:21.177 22:12:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:21.437 true 00:13:21.437 22:12:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:21.437 22:12:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.437 22:12:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.697 22:12:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:21.697 22:12:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:21.956 true 00:13:21.956 22:12:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:21.956 22:12:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.216 22:12:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.216 22:12:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:22.216 22:12:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:22.475 true 00:13:22.475 22:12:17 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:22.475 22:12:17 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.735 22:12:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.735 22:12:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:22.735 22:12:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:22.996 true 00:13:22.996 22:12:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:22.996 22:12:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.255 22:12:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.255 22:12:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:23.255 22:12:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:23.515 true 00:13:23.515 22:12:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:23.515 22:12:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.774 22:12:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.034 22:12:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:24.034 22:12:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:24.034 true 00:13:24.034 22:12:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:24.034 22:12:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.294 22:12:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.552 22:12:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:24.552 22:12:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:24.552 true 00:13:24.811 22:12:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:24.811 22:12:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.811 22:12:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.070 22:12:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:25.070 22:12:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:25.329 true 00:13:25.329 22:12:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:25.329 22:12:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.329 22:12:20 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.589 22:12:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:25.589 22:12:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:25.849 true 00:13:25.849 22:12:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:25.849 22:12:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.109 22:12:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.109 22:12:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:26.109 22:12:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:26.432 true 00:13:26.432 22:12:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:26.432 22:12:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.432 22:12:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.701 22:12:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:26.701 22:12:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:26.961 true 00:13:26.961 22:12:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:26.961 22:12:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.220 22:12:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.220 22:12:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:27.220 22:12:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:27.480 true 00:13:27.480 22:12:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:27.480 22:12:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.739 22:12:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.739 22:12:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:27.739 22:12:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:27.999 true 00:13:27.999 22:12:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:27.999 22:12:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.258 22:12:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:13:28.258 22:12:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:28.258 22:12:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:28.516 true 00:13:28.516 22:12:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:28.516 22:12:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.775 22:12:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.034 22:12:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:29.034 22:12:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:29.034 true 00:13:29.034 22:12:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:29.034 22:12:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.294 22:12:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.553 22:12:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:29.553 22:12:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:29.553 true 00:13:29.553 22:12:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:29.553 22:12:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.844 22:12:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.104 22:12:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:30.104 22:12:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:30.104 true 00:13:30.104 22:12:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:30.104 22:12:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.364 22:12:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.624 22:12:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:30.624 22:12:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:30.624 true 00:13:30.883 22:12:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:30.883 22:12:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.883 22:12:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.142 22:12:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:31.143 22:12:26 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:31.402 true 00:13:31.402 22:12:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:31.402 22:12:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.402 22:12:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.661 22:12:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:31.661 22:12:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:31.920 true 00:13:31.920 22:12:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:31.920 22:12:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.180 22:12:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.180 22:12:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:32.180 22:12:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:32.440 true 00:13:32.440 22:12:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:32.440 22:12:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.700 22:12:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.700 22:12:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:32.700 22:12:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:32.960 true 00:13:32.960 22:12:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:32.960 22:12:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.219 22:12:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.479 22:12:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:33.479 22:12:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:33.479 true 00:13:33.479 22:12:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:33.479 22:12:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.739 22:12:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.998 22:12:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:33.998 22:12:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1030 00:13:33.998 true 00:13:33.998 22:12:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:33.998 22:12:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.258 22:12:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.517 22:12:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:13:34.517 22:12:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:13:34.777 true 00:13:34.777 22:12:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:34.777 22:12:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.777 22:12:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.037 22:12:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:13:35.037 22:12:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:13:35.297 true 00:13:35.297 22:12:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:35.297 22:12:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.297 22:12:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.556 22:12:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:13:35.556 22:12:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:13:35.816 true 00:13:35.816 22:12:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:35.816 22:12:30 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.075 22:12:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.075 22:12:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:13:36.075 22:12:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:13:36.334 true 00:13:36.334 22:12:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:36.334 22:12:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.593 22:12:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.593 22:12:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:13:36.593 22:12:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:13:36.851 true 00:13:36.851 22:12:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:36.851 
22:12:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.110 22:12:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.370 22:12:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:13:37.370 22:12:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:13:37.370 true 00:13:37.370 22:12:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:37.370 22:12:32 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.629 22:12:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.888 22:12:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:13:37.889 22:12:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:13:37.889 true 00:13:37.889 22:12:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:37.889 22:12:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.148 22:12:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.407 22:12:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:13:38.407 22:12:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:13:38.407 true 00:13:38.667 22:12:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:38.667 22:12:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.667 22:12:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.925 22:12:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:13:38.925 22:12:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:13:39.185 true 00:13:39.185 22:12:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:39.185 22:12:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.185 22:12:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.444 22:12:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:13:39.444 22:12:34 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:13:39.703 true 00:13:39.703 22:12:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:39.704 22:12:34 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.998 22:12:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.998 22:12:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:13:39.998 22:12:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:13:40.263 true 00:13:40.263 22:12:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:40.263 22:12:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.522 22:12:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.522 22:12:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:13:40.522 22:12:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:13:40.782 true 00:13:40.782 22:12:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:40.782 22:12:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.041 22:12:35 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.041 22:12:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:13:41.041 22:12:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:13:41.300 true 00:13:41.300 22:12:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:41.300 22:12:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.560 22:12:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.818 22:12:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:13:41.818 22:12:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:13:41.818 true 00:13:41.818 22:12:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:41.818 22:12:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.076 22:12:37 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.335 22:12:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:13:42.335 22:12:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:13:42.335 true 00:13:42.335 22:12:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:42.335 22:12:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.594 22:12:37 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.854 22:12:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:13:42.854 22:12:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:13:43.113 true 00:13:43.113 22:12:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:43.113 22:12:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.113 22:12:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.372 22:12:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:13:43.372 22:12:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:13:43.632 true 00:13:43.632 22:12:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:43.632 22:12:38 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.891 22:12:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.891 22:12:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:13:43.891 22:12:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:13:44.151 true 00:13:44.151 22:12:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:44.151 22:12:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.411 22:12:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.411 22:12:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:13:44.411 22:12:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:13:44.670 true 00:13:44.670 22:12:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:44.670 22:12:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.930 22:12:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.189 22:12:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:13:45.189 22:12:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:13:45.189 true 00:13:45.189 22:12:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:45.189 22:12:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.449 22:12:40 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:13:45.708 22:12:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:13:45.708 22:12:40 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:13:45.708 true 00:13:45.968 22:12:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:45.968 22:12:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.968 22:12:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.228 22:12:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:13:46.228 22:12:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:13:46.488 true 00:13:46.488 22:12:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:46.488 22:12:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.488 22:12:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.747 22:12:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:13:46.747 22:12:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:13:47.007 true 00:13:47.007 22:12:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:47.007 22:12:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.267 22:12:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.267 22:12:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:13:47.267 22:12:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:13:47.526 true 00:13:47.526 22:12:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:47.526 22:12:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.786 Initializing NVMe Controllers 00:13:47.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:47.786 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:13:47.786 Controller IO queue size 128, less than required. 00:13:47.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:47.786 WARNING: Some requested NVMe devices were skipped 00:13:47.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:47.786 Initialization complete. Launching workers. 
00:13:47.786 ======================================================== 00:13:47.786 Latency(us) 00:13:47.786 Device Information : IOPS MiB/s Average min max 00:13:47.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 28803.20 14.06 4443.90 2335.26 14324.71 00:13:47.786 ======================================================== 00:13:47.786 Total : 28803.20 14.06 4443.90 2335.26 14324.71 00:13:47.786 00:13:47.786 22:12:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.045 22:12:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:13:48.045 22:12:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:13:48.045 true 00:13:48.045 22:12:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3484252 00:13:48.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3484252) - No such process 00:13:48.045 22:12:43 -- target/ns_hotplug_stress.sh@53 -- # wait 3484252 00:13:48.045 22:12:43 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.305 22:12:43 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:48.566 22:12:43 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:48.566 22:12:43 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:48.566 22:12:43 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:48.566 22:12:43 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:48.566 22:12:43 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:48.567 null0 00:13:48.567 22:12:43 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:48.567 22:12:43 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:48.567 22:12:43 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:48.826 null1 00:13:48.826 22:12:43 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:48.826 22:12:43 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:48.826 22:12:43 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:49.086 null2 00:13:49.086 22:12:44 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:49.086 22:12:44 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:49.086 22:12:44 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:49.086 null3 00:13:49.086 22:12:44 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:49.086 22:12:44 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:49.086 22:12:44 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:49.346 null4 00:13:49.346 22:12:44 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:49.346 22:12:44 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:49.346 22:12:44 -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:49.606 null5 00:13:49.606 22:12:44 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:49.606 22:12:44 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:49.606 22:12:44 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:49.606 null6 00:13:49.606 22:12:44 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:49.606 22:12:44 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:49.606 22:12:44 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:49.865 null7 00:13:49.865 22:12:44 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:49.865 22:12:44 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:49.865 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:49.865 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.865 22:12:44 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:49.865 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.865 22:12:44 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:49.865 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.865 22:12:44 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:49.865 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.865 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.865 22:12:44 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:49.865 22:12:44 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
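
The long run of @44-@50 trace entries leading up to the latency summary above is the first phase of ns_hotplug_stress: while the background I/O generator (pid 3484252 in this run) is still alive, the script hot-removes and re-adds namespace 1 (backed by the Delay0 bdev) and grows the NULL1 null bdev by one unit per pass, until the "kill: (3484252) - No such process" line shows the generator has exited. A rough sketch of that loop, reconstructed from the xtrace markers; the loop construct, the starting null_size, and the perf_pid/rpc_py variable names are inferred, not copied from the script (the trace expands rpc.py to its full path):

    perf_pid=3484252                                  # pid of the background I/O job in this run (assumed variable name)
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$perf_pid" 2> /dev/null; do        # @44: keep thrashing while the workload is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # @45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # @46
        null_size=$((null_size + 1))                                       # @49
        $rpc_py bdev_null_resize NULL1 "$null_size"                        # @50
    done
    wait "$perf_pid"                                  # @53: reap the generator once kill -0 starts failing
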
00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
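
The @58-@64 entries traced just above set up the second phase: eight null bdevs are created (arguments 100 and 4096, presumably size in MiB and block size), then one add_remove worker per namespace is launched in the background and its pid recorded; the matching wait on all eight pids appears a little further down as "wait 3489988 ... 3490000". A sketch under the same assumptions as before (variable names taken from the trace, exact script text not guaranteed):

    nthreads=8                                        # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do              # @59
        $rpc_py bdev_null_create "null$i" 100 4096    # @60: bdev name, size, block size as passed in the trace
    done
    for ((i = 0; i < nthreads; i++)); do              # @62
        add_remove $((i + 1)) "null$i" &              # @63: namespace i+1 backed by bdev null$i
        pids+=($!)                                    # @64
    done
    wait "${pids[@]}"                                 # @66: the 'wait 3489988 ... 3490000' seen below
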
00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@66 -- # wait 3489988 3489989 3489992 3489994 3489995 3489997 3489999 3490000 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:49.866 22:12:44 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.126 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.126 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.126 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.126 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.126 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.126 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.126 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.126 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.386 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
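
Each add_remove worker spawned above runs the small helper whose xtrace markers are @14-@18, and it is those markers that make up almost all of the remaining trace: ten rounds of attaching the given null bdev as the worker's namespace and detaching it again, with the eight workers' output interleaved because they run concurrently. Reconstructed from the trace (function and variable names follow the markers; the exact script text may differ):

    add_remove() {
        local nsid=$1 bdev=$2                         # @14
        for ((i = 0; i < 10; i++)); do                # @16
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }
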
00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.647 22:12:45 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.907 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.907 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.907 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.907 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.907 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.907 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.907 22:12:45 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.907 22:12:45 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.907 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.907 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.907 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.907 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.907 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.907 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:50.907 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.908 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.908 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.908 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.908 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.908 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.908 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.908 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.908 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.908 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.908 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.168 22:12:46 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:51.168 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.428 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.689 22:12:46 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.689 22:12:46 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.953 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.953 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.953 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.953 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:51.953 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.953 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.953 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:51.954 22:12:46 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.214 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.473 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.733 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.733 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.733 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:52.733 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.733 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.733 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.733 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:52.733 22:12:47 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
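
The rounds that follow repeat the same add/remove pattern until every worker's @16 counter reaches 10. When replaying a hotplug failure from a log like this, one way to see which namespaces are attached to the subsystem at a given moment is the nvmf_get_subsystems RPC; it is not part of this test script and is shown here only as an inspection aid:

    $rpc_py nvmf_get_subsystems      # lists each subsystem with its currently attached namespaces (nsid to bdev mapping)
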
00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.993 22:12:47 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.993 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.993 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.993 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.993 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.993 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:52.993 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.993 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.993 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.253 22:12:48 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:53.514 22:12:48 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:53.514 22:12:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:53.514 22:12:48 -- nvmf/common.sh@116 -- # sync 00:13:53.514 22:12:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:53.514 22:12:48 -- nvmf/common.sh@119 -- # set +e 00:13:53.514 22:12:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:53.514 22:12:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:53.514 rmmod nvme_tcp 00:13:53.845 rmmod nvme_fabrics 00:13:53.845 rmmod nvme_keyring 00:13:53.845 22:12:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:53.845 22:12:48 -- nvmf/common.sh@123 -- # set -e 00:13:53.845 22:12:48 -- nvmf/common.sh@124 -- # return 0 00:13:53.845 22:12:48 -- nvmf/common.sh@477 -- # '[' -n 3483832 ']' 00:13:53.845 22:12:48 -- nvmf/common.sh@478 -- # killprocess 3483832 00:13:53.845 22:12:48 -- common/autotest_common.sh@926 -- # '[' -z 3483832 ']' 00:13:53.845 22:12:48 -- common/autotest_common.sh@930 -- # kill -0 3483832 00:13:53.845 22:12:48 -- common/autotest_common.sh@931 -- # uname 00:13:53.845 22:12:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:53.845 22:12:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3483832 00:13:53.845 22:12:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:53.845 22:12:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:53.845 22:12:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3483832' 00:13:53.845 killing process with pid 3483832 00:13:53.845 22:12:48 -- common/autotest_common.sh@945 -- # kill 3483832 00:13:53.845 22:12:48 -- common/autotest_common.sh@950 -- # wait 3483832 00:13:53.845 22:12:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:53.845 22:12:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:53.845 22:12:48 -- 
nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:53.845 22:12:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:53.845 22:12:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:53.845 22:12:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.845 22:12:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.845 22:12:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.386 22:12:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:56.386 00:13:56.386 real 0m46.868s 00:13:56.386 user 3m16.896s 00:13:56.386 sys 0m17.232s 00:13:56.386 22:12:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:56.386 22:12:50 -- common/autotest_common.sh@10 -- # set +x 00:13:56.386 ************************************ 00:13:56.386 END TEST nvmf_ns_hotplug_stress 00:13:56.386 ************************************ 00:13:56.386 22:12:51 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:56.386 22:12:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:56.386 22:12:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:56.386 22:12:51 -- common/autotest_common.sh@10 -- # set +x 00:13:56.386 ************************************ 00:13:56.386 START TEST nvmf_connect_stress 00:13:56.386 ************************************ 00:13:56.386 22:12:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:56.386 * Looking for test storage... 00:13:56.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:56.386 22:12:51 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:56.386 22:12:51 -- nvmf/common.sh@7 -- # uname -s 00:13:56.386 22:12:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:56.386 22:12:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:56.386 22:12:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:56.386 22:12:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:56.386 22:12:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:56.386 22:12:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:56.386 22:12:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:56.386 22:12:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:56.386 22:12:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:56.386 22:12:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:56.386 22:12:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:56.386 22:12:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:56.386 22:12:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:56.386 22:12:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:56.386 22:12:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:56.386 22:12:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:56.386 22:12:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:56.386 22:12:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:56.386 22:12:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:56.386 
22:12:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.386 22:12:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.386 22:12:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.386 22:12:51 -- paths/export.sh@5 -- # export PATH 00:13:56.386 22:12:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:56.386 22:12:51 -- nvmf/common.sh@46 -- # : 0 00:13:56.386 22:12:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:56.386 22:12:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:56.386 22:12:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:56.387 22:12:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:56.387 22:12:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:56.387 22:12:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:56.387 22:12:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:56.387 22:12:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:56.387 22:12:51 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:56.387 22:12:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:56.387 22:12:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:56.387 22:12:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:56.387 22:12:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:56.387 22:12:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:56.387 22:12:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.387 22:12:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:56.387 22:12:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:56.387 22:12:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:56.387 22:12:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:56.387 22:12:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:56.387 22:12:51 -- common/autotest_common.sh@10 -- # set +x 00:14:01.670 22:12:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:01.670 22:12:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:01.670 22:12:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:01.670 22:12:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:01.670 22:12:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:01.670 22:12:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:01.670 22:12:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:01.670 22:12:56 -- nvmf/common.sh@294 -- # net_devs=() 00:14:01.670 22:12:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:01.670 22:12:56 -- nvmf/common.sh@295 -- # e810=() 00:14:01.670 22:12:56 -- nvmf/common.sh@295 -- # local -ga e810 00:14:01.670 22:12:56 -- nvmf/common.sh@296 -- # x722=() 00:14:01.670 22:12:56 -- nvmf/common.sh@296 -- # local -ga x722 00:14:01.670 22:12:56 -- nvmf/common.sh@297 -- # mlx=() 00:14:01.670 22:12:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:01.670 22:12:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:01.670 22:12:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:01.670 22:12:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:01.670 22:12:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:01.670 22:12:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:01.670 22:12:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:01.670 22:12:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:01.670 22:12:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:01.670 22:12:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:01.670 22:12:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:01.670 22:12:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:01.670 22:12:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:01.670 22:12:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:01.670 22:12:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:01.670 22:12:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:01.670 22:12:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:01.670 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:01.670 22:12:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:01.670 22:12:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:01.670 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:14:01.670 22:12:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:01.670 22:12:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:01.670 22:12:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.670 22:12:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:01.670 22:12:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.670 22:12:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:01.670 Found net devices under 0000:86:00.0: cvl_0_0 00:14:01.670 22:12:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.670 22:12:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:01.670 22:12:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.670 22:12:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:01.670 22:12:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.670 22:12:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:01.670 Found net devices under 0000:86:00.1: cvl_0_1 00:14:01.670 22:12:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.670 22:12:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:01.670 22:12:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:01.670 22:12:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:01.670 22:12:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.670 22:12:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:01.670 22:12:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:01.670 22:12:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:01.670 22:12:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:01.670 22:12:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:01.670 22:12:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:01.670 22:12:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:01.670 22:12:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.670 22:12:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:01.670 22:12:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:01.670 22:12:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:01.670 22:12:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:01.670 22:12:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:01.670 22:12:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:01.670 22:12:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:01.670 22:12:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:01.670 22:12:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:14:01.670 22:12:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:01.670 22:12:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:01.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:14:01.670 00:14:01.670 --- 10.0.0.2 ping statistics --- 00:14:01.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.670 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:14:01.670 22:12:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:01.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:01.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:14:01.670 00:14:01.670 --- 10.0.0.1 ping statistics --- 00:14:01.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.670 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:14:01.670 22:12:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.670 22:12:56 -- nvmf/common.sh@410 -- # return 0 00:14:01.670 22:12:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:01.670 22:12:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.670 22:12:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:01.670 22:12:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:01.671 22:12:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.671 22:12:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:01.671 22:12:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:01.671 22:12:56 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:01.671 22:12:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:01.671 22:12:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:01.671 22:12:56 -- common/autotest_common.sh@10 -- # set +x 00:14:01.671 22:12:56 -- nvmf/common.sh@469 -- # nvmfpid=3494190 00:14:01.671 22:12:56 -- nvmf/common.sh@470 -- # waitforlisten 3494190 00:14:01.671 22:12:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:01.671 22:12:56 -- common/autotest_common.sh@819 -- # '[' -z 3494190 ']' 00:14:01.671 22:12:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.671 22:12:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:01.671 22:12:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.671 22:12:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:01.671 22:12:56 -- common/autotest_common.sh@10 -- # set +x 00:14:01.671 [2024-07-24 22:12:56.667917] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:14:01.671 [2024-07-24 22:12:56.667963] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.671 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.671 [2024-07-24 22:12:56.727636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:01.671 [2024-07-24 22:12:56.766987] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:01.671 [2024-07-24 22:12:56.767135] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.671 [2024-07-24 22:12:56.767146] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.671 [2024-07-24 22:12:56.767153] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.671 [2024-07-24 22:12:56.767252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.671 [2024-07-24 22:12:56.767339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.671 [2024-07-24 22:12:56.767340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.612 22:12:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:02.612 22:12:57 -- common/autotest_common.sh@852 -- # return 0 00:14:02.612 22:12:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:02.612 22:12:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:02.612 22:12:57 -- common/autotest_common.sh@10 -- # set +x 00:14:02.612 22:12:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.612 22:12:57 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.612 22:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.612 22:12:57 -- common/autotest_common.sh@10 -- # set +x 00:14:02.612 [2024-07-24 22:12:57.510719] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.612 22:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.612 22:12:57 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:02.612 22:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.612 22:12:57 -- common/autotest_common.sh@10 -- # set +x 00:14:02.612 22:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.612 22:12:57 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.612 22:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.612 22:12:57 -- common/autotest_common.sh@10 -- # set +x 00:14:02.612 [2024-07-24 22:12:57.550192] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.612 22:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.612 22:12:57 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:02.612 22:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.612 22:12:57 -- common/autotest_common.sh@10 -- # set +x 00:14:02.612 NULL1 00:14:02.612 22:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.612 22:12:57 -- target/connect_stress.sh@21 -- # PERF_PID=3494425 00:14:02.612 22:12:57 -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:02.612 22:12:57 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:02.612 22:12:57 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:02.612 22:12:57 -- target/connect_stress.sh@28 -- # cat 00:14:02.612 22:12:57 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:02.612 22:12:57 -- target/connect_stress.sh@35 -- # 
rpc_cmd 00:14:02.612 22:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.612 22:12:57 -- common/autotest_common.sh@10 -- # set +x 00:14:02.872 22:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.872 22:12:57 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:02.872 22:12:57 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.872 22:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.872 22:12:57 -- common/autotest_common.sh@10 -- # set +x 00:14:03.441 22:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:03.441 22:12:58 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:03.441 22:12:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.441 22:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:03.441 22:12:58 -- common/autotest_common.sh@10 -- # set +x 00:14:03.701 22:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:03.701 22:12:58 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:03.701 22:12:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.701 22:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:03.701 22:12:58 -- common/autotest_common.sh@10 -- # set +x 00:14:03.961 22:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:03.961 22:12:58 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:03.961 22:12:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.961 22:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:03.961 22:12:58 -- common/autotest_common.sh@10 -- # set +x 00:14:04.220 22:12:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.220 22:12:59 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:04.220 22:12:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.220 22:12:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.220 22:12:59 -- common/autotest_common.sh@10 -- # set +x 00:14:04.480 22:12:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:04.480 22:12:59 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:04.480 22:12:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.480 22:12:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:04.480 22:12:59 -- common/autotest_common.sh@10 -- # set +x 00:14:05.050 22:12:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.050 22:12:59 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:05.050 22:12:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.050 22:12:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.050 22:12:59 -- common/autotest_common.sh@10 -- # set +x 00:14:05.310 22:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.310 22:13:00 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:05.310 22:13:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.310 22:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.310 22:13:00 -- common/autotest_common.sh@10 -- # set +x 00:14:05.570 22:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.570 22:13:00 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:05.570 22:13:00 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.570 22:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.570 22:13:00 -- common/autotest_common.sh@10 -- # set +x 00:14:05.830 22:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:05.830 22:13:00 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:05.830 22:13:00 -- target/connect_stress.sh@35 -- # rpc_cmd 
00:14:05.830 22:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:05.830 22:13:00 -- common/autotest_common.sh@10 -- # set +x 00:14:06.089 22:13:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.089 22:13:01 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:06.089 22:13:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.089 22:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.089 22:13:01 -- common/autotest_common.sh@10 -- # set +x 00:14:06.659 22:13:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.659 22:13:01 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:06.659 22:13:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.659 22:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.659 22:13:01 -- common/autotest_common.sh@10 -- # set +x 00:14:06.919 22:13:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:06.919 22:13:01 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:06.919 22:13:01 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.919 22:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:06.919 22:13:01 -- common/autotest_common.sh@10 -- # set +x 00:14:07.178 22:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.179 22:13:02 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:07.179 22:13:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.179 22:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.179 22:13:02 -- common/autotest_common.sh@10 -- # set +x 00:14:07.438 22:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.438 22:13:02 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:07.438 22:13:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.438 22:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.438 22:13:02 -- common/autotest_common.sh@10 -- # set +x 00:14:07.698 22:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.698 22:13:02 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:07.698 22:13:02 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.698 22:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.698 22:13:02 -- common/autotest_common.sh@10 -- # set +x 00:14:08.267 22:13:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.267 22:13:03 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:08.267 22:13:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.267 22:13:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.267 22:13:03 -- common/autotest_common.sh@10 -- # set +x 00:14:08.526 22:13:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.526 22:13:03 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:08.526 22:13:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.526 22:13:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.526 22:13:03 -- common/autotest_common.sh@10 -- # set +x 00:14:08.786 22:13:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:08.786 22:13:03 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:08.786 22:13:03 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.786 22:13:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:08.786 22:13:03 -- common/autotest_common.sh@10 -- # set +x 00:14:09.045 22:13:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.045 22:13:04 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:09.045 22:13:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.045 
22:13:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.045 22:13:04 -- common/autotest_common.sh@10 -- # set +x 00:14:09.305 22:13:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.305 22:13:04 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:09.305 22:13:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.305 22:13:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.305 22:13:04 -- common/autotest_common.sh@10 -- # set +x 00:14:09.872 22:13:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:09.872 22:13:04 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:09.872 22:13:04 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.872 22:13:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:09.872 22:13:04 -- common/autotest_common.sh@10 -- # set +x 00:14:10.130 22:13:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.130 22:13:05 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:10.130 22:13:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.130 22:13:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.130 22:13:05 -- common/autotest_common.sh@10 -- # set +x 00:14:10.388 22:13:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.388 22:13:05 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:10.388 22:13:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.388 22:13:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.388 22:13:05 -- common/autotest_common.sh@10 -- # set +x 00:14:10.647 22:13:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:10.647 22:13:05 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:10.647 22:13:05 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.647 22:13:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:10.647 22:13:05 -- common/autotest_common.sh@10 -- # set +x 00:14:11.214 22:13:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.214 22:13:06 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:11.214 22:13:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.214 22:13:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.214 22:13:06 -- common/autotest_common.sh@10 -- # set +x 00:14:11.473 22:13:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.473 22:13:06 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:11.474 22:13:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.474 22:13:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.474 22:13:06 -- common/autotest_common.sh@10 -- # set +x 00:14:11.733 22:13:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.733 22:13:06 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:11.733 22:13:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.733 22:13:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.733 22:13:06 -- common/autotest_common.sh@10 -- # set +x 00:14:11.992 22:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:11.992 22:13:07 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:11.992 22:13:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:11.992 22:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:11.992 22:13:07 -- common/autotest_common.sh@10 -- # set +x 00:14:12.288 22:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.288 22:13:07 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:12.288 22:13:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.288 22:13:07 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.288 22:13:07 -- common/autotest_common.sh@10 -- # set +x 00:14:12.547 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:12.807 22:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.807 22:13:07 -- target/connect_stress.sh@34 -- # kill -0 3494425 00:14:12.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3494425) - No such process 00:14:12.807 22:13:07 -- target/connect_stress.sh@38 -- # wait 3494425 00:14:12.807 22:13:07 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:12.807 22:13:07 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:12.807 22:13:07 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:12.807 22:13:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:12.807 22:13:07 -- nvmf/common.sh@116 -- # sync 00:14:12.807 22:13:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:12.807 22:13:07 -- nvmf/common.sh@119 -- # set +e 00:14:12.807 22:13:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:12.807 22:13:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:12.807 rmmod nvme_tcp 00:14:12.807 rmmod nvme_fabrics 00:14:12.807 rmmod nvme_keyring 00:14:12.807 22:13:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:12.807 22:13:07 -- nvmf/common.sh@123 -- # set -e 00:14:12.807 22:13:07 -- nvmf/common.sh@124 -- # return 0 00:14:12.807 22:13:07 -- nvmf/common.sh@477 -- # '[' -n 3494190 ']' 00:14:12.807 22:13:07 -- nvmf/common.sh@478 -- # killprocess 3494190 00:14:12.807 22:13:07 -- common/autotest_common.sh@926 -- # '[' -z 3494190 ']' 00:14:12.807 22:13:07 -- common/autotest_common.sh@930 -- # kill -0 3494190 00:14:12.807 22:13:07 -- common/autotest_common.sh@931 -- # uname 00:14:12.807 22:13:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:12.807 22:13:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3494190 00:14:12.807 22:13:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:12.807 22:13:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:12.807 22:13:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3494190' 00:14:12.807 killing process with pid 3494190 00:14:12.807 22:13:07 -- common/autotest_common.sh@945 -- # kill 3494190 00:14:12.807 22:13:07 -- common/autotest_common.sh@950 -- # wait 3494190 00:14:13.067 22:13:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:13.067 22:13:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:13.067 22:13:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:13.067 22:13:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.067 22:13:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:13.067 22:13:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.067 22:13:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.067 22:13:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.975 22:13:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:14.975 00:14:14.975 real 0m18.998s 00:14:14.975 user 0m40.842s 00:14:14.975 sys 0m8.071s 00:14:14.975 22:13:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.975 22:13:10 -- common/autotest_common.sh@10 -- # set +x 00:14:14.975 ************************************ 00:14:14.975 END TEST nvmf_connect_stress 00:14:14.975 
************************************ 00:14:14.975 22:13:10 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:14.975 22:13:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:14.975 22:13:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:14.975 22:13:10 -- common/autotest_common.sh@10 -- # set +x 00:14:14.975 ************************************ 00:14:14.975 START TEST nvmf_fused_ordering 00:14:14.975 ************************************ 00:14:14.975 22:13:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:15.235 * Looking for test storage... 00:14:15.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:15.235 22:13:10 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:15.235 22:13:10 -- nvmf/common.sh@7 -- # uname -s 00:14:15.235 22:13:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.235 22:13:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.235 22:13:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.235 22:13:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.235 22:13:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.235 22:13:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.235 22:13:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.235 22:13:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.235 22:13:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.235 22:13:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.235 22:13:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:15.235 22:13:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:15.235 22:13:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.235 22:13:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.235 22:13:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:15.235 22:13:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:15.235 22:13:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.235 22:13:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.235 22:13:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.235 22:13:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.235 22:13:10 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.235 22:13:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.235 22:13:10 -- paths/export.sh@5 -- # export PATH 00:14:15.235 22:13:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.235 22:13:10 -- nvmf/common.sh@46 -- # : 0 00:14:15.235 22:13:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:15.235 22:13:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:15.235 22:13:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:15.235 22:13:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.235 22:13:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.235 22:13:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:15.235 22:13:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:15.235 22:13:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:15.235 22:13:10 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:15.235 22:13:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:15.235 22:13:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.235 22:13:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:15.235 22:13:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:15.235 22:13:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:15.235 22:13:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.235 22:13:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:15.235 22:13:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.235 22:13:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:15.235 22:13:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:15.235 22:13:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:15.235 22:13:10 -- common/autotest_common.sh@10 -- # set +x 00:14:20.514 22:13:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:20.514 22:13:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:20.514 22:13:14 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:14:20.514 22:13:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:20.514 22:13:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:20.514 22:13:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:20.514 22:13:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:20.514 22:13:14 -- nvmf/common.sh@294 -- # net_devs=() 00:14:20.514 22:13:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:20.514 22:13:14 -- nvmf/common.sh@295 -- # e810=() 00:14:20.514 22:13:14 -- nvmf/common.sh@295 -- # local -ga e810 00:14:20.514 22:13:14 -- nvmf/common.sh@296 -- # x722=() 00:14:20.514 22:13:14 -- nvmf/common.sh@296 -- # local -ga x722 00:14:20.514 22:13:14 -- nvmf/common.sh@297 -- # mlx=() 00:14:20.514 22:13:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:20.514 22:13:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.514 22:13:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.514 22:13:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.514 22:13:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.514 22:13:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.514 22:13:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.514 22:13:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.514 22:13:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.514 22:13:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.514 22:13:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.514 22:13:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.514 22:13:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:20.514 22:13:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:20.514 22:13:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:20.514 22:13:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:20.514 22:13:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:20.514 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:20.514 22:13:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:20.514 22:13:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:20.514 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:20.514 22:13:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:20.514 22:13:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:20.514 22:13:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 
00:14:20.514 22:13:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:20.514 22:13:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.514 22:13:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:20.515 22:13:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.515 22:13:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:20.515 Found net devices under 0000:86:00.0: cvl_0_0 00:14:20.515 22:13:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.515 22:13:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:20.515 22:13:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.515 22:13:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:20.515 22:13:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.515 22:13:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:20.515 Found net devices under 0000:86:00.1: cvl_0_1 00:14:20.515 22:13:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.515 22:13:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:20.515 22:13:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:20.515 22:13:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:20.515 22:13:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:20.515 22:13:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:20.515 22:13:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.515 22:13:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.515 22:13:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.515 22:13:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:20.515 22:13:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.515 22:13:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.515 22:13:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:20.515 22:13:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.515 22:13:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.515 22:13:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:20.515 22:13:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:20.515 22:13:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.515 22:13:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.515 22:13:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.515 22:13:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.515 22:13:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:20.515 22:13:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.515 22:13:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.515 22:13:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.515 22:13:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:20.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:20.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:14:20.515 00:14:20.515 --- 10.0.0.2 ping statistics --- 00:14:20.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.515 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:14:20.515 22:13:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:14:20.515 00:14:20.515 --- 10.0.0.1 ping statistics --- 00:14:20.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.515 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:14:20.515 22:13:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.515 22:13:15 -- nvmf/common.sh@410 -- # return 0 00:14:20.515 22:13:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:20.515 22:13:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.515 22:13:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:20.515 22:13:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:20.515 22:13:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.515 22:13:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:20.515 22:13:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:20.515 22:13:15 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:20.515 22:13:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:20.515 22:13:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:20.515 22:13:15 -- common/autotest_common.sh@10 -- # set +x 00:14:20.515 22:13:15 -- nvmf/common.sh@469 -- # nvmfpid=3499569 00:14:20.515 22:13:15 -- nvmf/common.sh@470 -- # waitforlisten 3499569 00:14:20.515 22:13:15 -- common/autotest_common.sh@819 -- # '[' -z 3499569 ']' 00:14:20.515 22:13:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.515 22:13:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:20.515 22:13:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.515 22:13:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:20.515 22:13:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:20.515 22:13:15 -- common/autotest_common.sh@10 -- # set +x 00:14:20.515 [2024-07-24 22:13:15.087640] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:14:20.515 [2024-07-24 22:13:15.087685] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.515 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.515 [2024-07-24 22:13:15.141522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.515 [2024-07-24 22:13:15.180902] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:20.515 [2024-07-24 22:13:15.181026] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:20.515 [2024-07-24 22:13:15.181034] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.515 [2024-07-24 22:13:15.181040] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.515 [2024-07-24 22:13:15.181063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.774 22:13:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:20.774 22:13:15 -- common/autotest_common.sh@852 -- # return 0 00:14:20.774 22:13:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:20.775 22:13:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:20.775 22:13:15 -- common/autotest_common.sh@10 -- # set +x 00:14:21.035 22:13:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.035 22:13:15 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:21.035 22:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.035 22:13:15 -- common/autotest_common.sh@10 -- # set +x 00:14:21.035 [2024-07-24 22:13:15.927595] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.035 22:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.035 22:13:15 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:21.035 22:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.035 22:13:15 -- common/autotest_common.sh@10 -- # set +x 00:14:21.035 22:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.035 22:13:15 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.035 22:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.035 22:13:15 -- common/autotest_common.sh@10 -- # set +x 00:14:21.035 [2024-07-24 22:13:15.943727] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.035 22:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.035 22:13:15 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:21.035 22:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.035 22:13:15 -- common/autotest_common.sh@10 -- # set +x 00:14:21.035 NULL1 00:14:21.035 22:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.035 22:13:15 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:21.035 22:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.035 22:13:15 -- common/autotest_common.sh@10 -- # set +x 00:14:21.035 22:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.035 22:13:15 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:21.035 22:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.035 22:13:15 -- common/autotest_common.sh@10 -- # set +x 00:14:21.035 22:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.035 22:13:15 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:21.035 [2024-07-24 22:13:15.995155] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:14:21.035 [2024-07-24 22:13:15.995195] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3499638 ] 00:14:21.035 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.975 Attached to nqn.2016-06.io.spdk:cnode1 00:14:21.975 Namespace ID: 1 size: 1GB 00:14:21.975 fused_ordering(0) 00:14:21.975 fused_ordering(1) 00:14:21.975 fused_ordering(2) 00:14:21.975 fused_ordering(3) 00:14:21.975 fused_ordering(4) 00:14:21.975 fused_ordering(5) 00:14:21.975 fused_ordering(6) 00:14:21.975 fused_ordering(7) 00:14:21.975 fused_ordering(8) 00:14:21.975 fused_ordering(9) 00:14:21.975 fused_ordering(10) 00:14:21.975 fused_ordering(11) 00:14:21.975 fused_ordering(12) 00:14:21.975 fused_ordering(13) 00:14:21.975 fused_ordering(14) 00:14:21.975 fused_ordering(15) 00:14:21.975 fused_ordering(16) 00:14:21.975 fused_ordering(17) 00:14:21.975 fused_ordering(18) 00:14:21.975 fused_ordering(19) 00:14:21.975 fused_ordering(20) 00:14:21.975 fused_ordering(21) 00:14:21.975 fused_ordering(22) 00:14:21.975 fused_ordering(23) 00:14:21.975 fused_ordering(24) 00:14:21.975 fused_ordering(25) 00:14:21.975 fused_ordering(26) 00:14:21.975 fused_ordering(27) 00:14:21.975 fused_ordering(28) 00:14:21.975 fused_ordering(29) 00:14:21.975 fused_ordering(30) 00:14:21.975 fused_ordering(31) 00:14:21.975 fused_ordering(32) 00:14:21.975 fused_ordering(33) 00:14:21.975 fused_ordering(34) 00:14:21.975 fused_ordering(35) 00:14:21.975 fused_ordering(36) 00:14:21.975 fused_ordering(37) 00:14:21.975 fused_ordering(38) 00:14:21.975 fused_ordering(39) 00:14:21.975 fused_ordering(40) 00:14:21.975 fused_ordering(41) 00:14:21.975 fused_ordering(42) 00:14:21.975 fused_ordering(43) 00:14:21.975 fused_ordering(44) 00:14:21.975 fused_ordering(45) 00:14:21.975 fused_ordering(46) 00:14:21.975 fused_ordering(47) 00:14:21.975 fused_ordering(48) 00:14:21.975 fused_ordering(49) 00:14:21.975 fused_ordering(50) 00:14:21.975 fused_ordering(51) 00:14:21.975 fused_ordering(52) 00:14:21.975 fused_ordering(53) 00:14:21.975 fused_ordering(54) 00:14:21.975 fused_ordering(55) 00:14:21.975 fused_ordering(56) 00:14:21.975 fused_ordering(57) 00:14:21.975 fused_ordering(58) 00:14:21.975 fused_ordering(59) 00:14:21.975 fused_ordering(60) 00:14:21.975 fused_ordering(61) 00:14:21.975 fused_ordering(62) 00:14:21.975 fused_ordering(63) 00:14:21.975 fused_ordering(64) 00:14:21.975 fused_ordering(65) 00:14:21.975 fused_ordering(66) 00:14:21.975 fused_ordering(67) 00:14:21.975 fused_ordering(68) 00:14:21.975 fused_ordering(69) 00:14:21.975 fused_ordering(70) 00:14:21.975 fused_ordering(71) 00:14:21.975 fused_ordering(72) 00:14:21.975 fused_ordering(73) 00:14:21.975 fused_ordering(74) 00:14:21.975 fused_ordering(75) 00:14:21.975 fused_ordering(76) 00:14:21.975 fused_ordering(77) 00:14:21.975 fused_ordering(78) 00:14:21.975 fused_ordering(79) 00:14:21.975 fused_ordering(80) 00:14:21.975 fused_ordering(81) 00:14:21.975 fused_ordering(82) 00:14:21.975 fused_ordering(83) 00:14:21.975 fused_ordering(84) 00:14:21.975 fused_ordering(85) 00:14:21.975 fused_ordering(86) 00:14:21.975 fused_ordering(87) 00:14:21.975 fused_ordering(88) 00:14:21.975 fused_ordering(89) 00:14:21.975 fused_ordering(90) 00:14:21.975 fused_ordering(91) 00:14:21.975 fused_ordering(92) 00:14:21.975 fused_ordering(93) 00:14:21.975 fused_ordering(94) 00:14:21.975 fused_ordering(95) 00:14:21.975 fused_ordering(96) 00:14:21.975 
[fused_ordering(97) through fused_ordering(956): 860 further fused-ordering completions, logged between 00:14:21.975 and 00:14:25.812]
fused_ordering(957) 00:14:25.812 fused_ordering(958) 00:14:25.812 fused_ordering(959) 00:14:25.812 fused_ordering(960) 00:14:25.812 fused_ordering(961) 00:14:25.812 fused_ordering(962) 00:14:25.812 fused_ordering(963) 00:14:25.812 fused_ordering(964) 00:14:25.812 fused_ordering(965) 00:14:25.812 fused_ordering(966) 00:14:25.812 fused_ordering(967) 00:14:25.812 fused_ordering(968) 00:14:25.812 fused_ordering(969) 00:14:25.812 fused_ordering(970) 00:14:25.812 fused_ordering(971) 00:14:25.812 fused_ordering(972) 00:14:25.812 fused_ordering(973) 00:14:25.812 fused_ordering(974) 00:14:25.812 fused_ordering(975) 00:14:25.812 fused_ordering(976) 00:14:25.812 fused_ordering(977) 00:14:25.812 fused_ordering(978) 00:14:25.812 fused_ordering(979) 00:14:25.812 fused_ordering(980) 00:14:25.812 fused_ordering(981) 00:14:25.812 fused_ordering(982) 00:14:25.812 fused_ordering(983) 00:14:25.812 fused_ordering(984) 00:14:25.812 fused_ordering(985) 00:14:25.812 fused_ordering(986) 00:14:25.812 fused_ordering(987) 00:14:25.812 fused_ordering(988) 00:14:25.812 fused_ordering(989) 00:14:25.812 fused_ordering(990) 00:14:25.812 fused_ordering(991) 00:14:25.812 fused_ordering(992) 00:14:25.812 fused_ordering(993) 00:14:25.812 fused_ordering(994) 00:14:25.812 fused_ordering(995) 00:14:25.812 fused_ordering(996) 00:14:25.812 fused_ordering(997) 00:14:25.812 fused_ordering(998) 00:14:25.812 fused_ordering(999) 00:14:25.812 fused_ordering(1000) 00:14:25.812 fused_ordering(1001) 00:14:25.812 fused_ordering(1002) 00:14:25.812 fused_ordering(1003) 00:14:25.812 fused_ordering(1004) 00:14:25.812 fused_ordering(1005) 00:14:25.812 fused_ordering(1006) 00:14:25.812 fused_ordering(1007) 00:14:25.812 fused_ordering(1008) 00:14:25.812 fused_ordering(1009) 00:14:25.812 fused_ordering(1010) 00:14:25.812 fused_ordering(1011) 00:14:25.812 fused_ordering(1012) 00:14:25.812 fused_ordering(1013) 00:14:25.812 fused_ordering(1014) 00:14:25.812 fused_ordering(1015) 00:14:25.812 fused_ordering(1016) 00:14:25.812 fused_ordering(1017) 00:14:25.812 fused_ordering(1018) 00:14:25.812 fused_ordering(1019) 00:14:25.812 fused_ordering(1020) 00:14:25.812 fused_ordering(1021) 00:14:25.812 fused_ordering(1022) 00:14:25.812 fused_ordering(1023) 00:14:25.812 22:13:20 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:25.812 22:13:20 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:25.812 22:13:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:25.812 22:13:20 -- nvmf/common.sh@116 -- # sync 00:14:25.812 22:13:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:25.812 22:13:20 -- nvmf/common.sh@119 -- # set +e 00:14:25.812 22:13:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:25.812 22:13:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:25.812 rmmod nvme_tcp 00:14:25.812 rmmod nvme_fabrics 00:14:25.812 rmmod nvme_keyring 00:14:25.812 22:13:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:25.812 22:13:20 -- nvmf/common.sh@123 -- # set -e 00:14:25.812 22:13:20 -- nvmf/common.sh@124 -- # return 0 00:14:25.812 22:13:20 -- nvmf/common.sh@477 -- # '[' -n 3499569 ']' 00:14:25.812 22:13:20 -- nvmf/common.sh@478 -- # killprocess 3499569 00:14:25.812 22:13:20 -- common/autotest_common.sh@926 -- # '[' -z 3499569 ']' 00:14:25.812 22:13:20 -- common/autotest_common.sh@930 -- # kill -0 3499569 00:14:25.812 22:13:20 -- common/autotest_common.sh@931 -- # uname 00:14:25.812 22:13:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:25.812 22:13:20 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 3499569 00:14:25.812 22:13:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:25.812 22:13:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:25.812 22:13:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3499569' 00:14:25.812 killing process with pid 3499569 00:14:25.812 22:13:20 -- common/autotest_common.sh@945 -- # kill 3499569 00:14:25.812 22:13:20 -- common/autotest_common.sh@950 -- # wait 3499569 00:14:25.812 22:13:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:25.812 22:13:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:25.812 22:13:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:25.812 22:13:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.812 22:13:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:25.812 22:13:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.812 22:13:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.812 22:13:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.355 22:13:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:28.355 00:14:28.355 real 0m12.809s 00:14:28.355 user 0m8.342s 00:14:28.355 sys 0m7.231s 00:14:28.355 22:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.355 22:13:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.355 ************************************ 00:14:28.355 END TEST nvmf_fused_ordering 00:14:28.355 ************************************ 00:14:28.355 22:13:22 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:28.355 22:13:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:28.355 22:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:28.355 22:13:22 -- common/autotest_common.sh@10 -- # set +x 00:14:28.355 ************************************ 00:14:28.355 START TEST nvmf_delete_subsystem 00:14:28.355 ************************************ 00:14:28.355 22:13:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:28.356 * Looking for test storage... 
00:14:28.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.356 22:13:22 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.356 22:13:22 -- nvmf/common.sh@7 -- # uname -s 00:14:28.356 22:13:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.356 22:13:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.356 22:13:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.356 22:13:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.356 22:13:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.356 22:13:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.356 22:13:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.356 22:13:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.356 22:13:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.356 22:13:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.356 22:13:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:28.356 22:13:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:28.356 22:13:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.356 22:13:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.356 22:13:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.356 22:13:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.356 22:13:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.356 22:13:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.356 22:13:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.356 22:13:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.356 22:13:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.356 22:13:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.356 22:13:23 -- paths/export.sh@5 -- # export PATH 00:14:28.356 22:13:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.356 22:13:23 -- nvmf/common.sh@46 -- # : 0 00:14:28.356 22:13:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:28.356 22:13:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:28.356 22:13:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:28.356 22:13:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.356 22:13:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.356 22:13:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:28.356 22:13:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:28.356 22:13:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:28.356 22:13:23 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:28.356 22:13:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:28.356 22:13:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.356 22:13:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:28.356 22:13:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:28.356 22:13:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:28.356 22:13:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.356 22:13:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.356 22:13:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.356 22:13:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:28.356 22:13:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:28.356 22:13:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:28.356 22:13:23 -- common/autotest_common.sh@10 -- # set +x 00:14:33.638 22:13:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:33.638 22:13:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:33.638 22:13:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:33.638 22:13:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:33.638 22:13:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:33.638 22:13:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:33.638 22:13:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:33.638 22:13:28 -- nvmf/common.sh@294 -- # net_devs=() 00:14:33.638 22:13:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:33.638 22:13:28 -- nvmf/common.sh@295 -- # e810=() 00:14:33.638 22:13:28 -- nvmf/common.sh@295 -- # local -ga e810 00:14:33.638 22:13:28 -- nvmf/common.sh@296 -- # x722=() 
00:14:33.638 22:13:28 -- nvmf/common.sh@296 -- # local -ga x722 00:14:33.638 22:13:28 -- nvmf/common.sh@297 -- # mlx=() 00:14:33.638 22:13:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:33.638 22:13:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:33.638 22:13:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:33.638 22:13:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:33.638 22:13:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:33.638 22:13:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:33.638 22:13:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:33.638 22:13:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:33.638 22:13:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:33.638 22:13:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:33.638 22:13:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:33.638 22:13:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:33.638 22:13:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:33.638 22:13:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:33.638 22:13:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:33.638 22:13:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:33.638 22:13:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:33.638 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:33.638 22:13:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:33.638 22:13:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:33.638 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:33.638 22:13:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:33.638 22:13:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:33.638 22:13:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.638 22:13:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:33.638 22:13:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.638 22:13:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:33.638 Found net devices under 0000:86:00.0: cvl_0_0 00:14:33.638 22:13:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
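The device discovery traced here builds a whitelist of Intel E810/X722 and Mellanox PCI device IDs and then resolves each matched function to its kernel net device through sysfs. A rough equivalent of that lookup, assuming the two E810 (0x159b) functions reported above and standard Linux sysfs paths:
# list the matched functions by PCI ID (0x159b, as found above)
lspci -D -d 8086:159b
# resolve each function to its net device, as the pci_net_devs glob in the trace does
for pci in 0000:86:00.0 0000:86:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net devices under $pci: $(basename "$dev")"
    done
done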
00:14:33.638 22:13:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:33.638 22:13:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:33.638 22:13:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:33.638 22:13:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:33.638 22:13:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:33.638 Found net devices under 0000:86:00.1: cvl_0_1 00:14:33.638 22:13:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:33.638 22:13:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:33.638 22:13:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:33.638 22:13:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:33.638 22:13:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:33.638 22:13:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.638 22:13:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.638 22:13:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:33.638 22:13:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:33.638 22:13:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:33.638 22:13:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:33.638 22:13:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:33.638 22:13:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:33.638 22:13:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.638 22:13:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:33.638 22:13:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:33.638 22:13:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:33.638 22:13:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:33.638 22:13:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:33.638 22:13:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:33.638 22:13:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:33.638 22:13:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:33.638 22:13:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:33.638 22:13:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:33.638 22:13:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:33.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:14:33.638 00:14:33.638 --- 10.0.0.2 ping statistics --- 00:14:33.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.639 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:14:33.639 22:13:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:33.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:33.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:14:33.639 00:14:33.639 --- 10.0.0.1 ping statistics --- 00:14:33.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.639 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:14:33.639 22:13:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.639 22:13:28 -- nvmf/common.sh@410 -- # return 0 00:14:33.639 22:13:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:33.639 22:13:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.639 22:13:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:33.639 22:13:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:33.639 22:13:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.639 22:13:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:33.639 22:13:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:33.639 22:13:28 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:33.639 22:13:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:33.639 22:13:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:33.639 22:13:28 -- common/autotest_common.sh@10 -- # set +x 00:14:33.639 22:13:28 -- nvmf/common.sh@469 -- # nvmfpid=3503872 00:14:33.639 22:13:28 -- nvmf/common.sh@470 -- # waitforlisten 3503872 00:14:33.639 22:13:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:33.639 22:13:28 -- common/autotest_common.sh@819 -- # '[' -z 3503872 ']' 00:14:33.639 22:13:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.639 22:13:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:33.639 22:13:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.639 22:13:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:33.639 22:13:28 -- common/autotest_common.sh@10 -- # set +x 00:14:33.639 [2024-07-24 22:13:28.586476] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:14:33.639 [2024-07-24 22:13:28.586520] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.639 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.639 [2024-07-24 22:13:28.644622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:33.639 [2024-07-24 22:13:28.682735] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:33.639 [2024-07-24 22:13:28.682852] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.639 [2024-07-24 22:13:28.682860] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.639 [2024-07-24 22:13:28.682866] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
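nvmf_tcp_init moves the target-side port cvl_0_0 into its own network namespace with address 10.0.0.2, leaves the initiator-side port cvl_0_1 in the default namespace on 10.0.0.1, verifies connectivity in both directions, and then starts nvmf_tgt inside that namespace. The commands below are collected from the trace above; a sketch only, assuming the cvl_0_0/cvl_0_1 interfaces exist and the SPDK build layout used by this job:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# connectivity check in both directions, as in the ping output above
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the target inside the namespace (core mask 0x3, tracepoint group mask 0xFFFF)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3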
00:14:33.639 [2024-07-24 22:13:28.682955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.639 [2024-07-24 22:13:28.682958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.580 22:13:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:34.580 22:13:29 -- common/autotest_common.sh@852 -- # return 0 00:14:34.580 22:13:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:34.580 22:13:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:34.580 22:13:29 -- common/autotest_common.sh@10 -- # set +x 00:14:34.580 22:13:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.580 22:13:29 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:34.580 22:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.580 22:13:29 -- common/autotest_common.sh@10 -- # set +x 00:14:34.580 [2024-07-24 22:13:29.406338] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:34.580 22:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.580 22:13:29 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:34.580 22:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.580 22:13:29 -- common/autotest_common.sh@10 -- # set +x 00:14:34.580 22:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.580 22:13:29 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:34.580 22:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.580 22:13:29 -- common/autotest_common.sh@10 -- # set +x 00:14:34.580 [2024-07-24 22:13:29.426519] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.580 22:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.580 22:13:29 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:34.580 22:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.580 22:13:29 -- common/autotest_common.sh@10 -- # set +x 00:14:34.580 NULL1 00:14:34.580 22:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.580 22:13:29 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:34.580 22:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.580 22:13:29 -- common/autotest_common.sh@10 -- # set +x 00:14:34.580 Delay0 00:14:34.580 22:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.580 22:13:29 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.580 22:13:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.580 22:13:29 -- common/autotest_common.sh@10 -- # set +x 00:14:34.580 22:13:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.580 22:13:29 -- target/delete_subsystem.sh@28 -- # perf_pid=3504122 00:14:34.580 22:13:29 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:34.580 22:13:29 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:34.580 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.580 [2024-07-24 22:13:29.507336] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:14:36.552 22:13:31 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:36.552 22:13:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:36.552 22:13:31 -- common/autotest_common.sh@10 -- # set +x
00:14:36.812 [Read/Write completed with error (sct=0, sc=8) lines, interleaved with "starting I/O failed: -6", repeat from 00:14:36.812 onward as the queued perf I/O is failed back during subsystem deletion]
00:14:36.813 [2024-07-24 22:13:31.758706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87e380 is same with the state(5) to be set
00:14:36.813 [2024-07-24 22:13:31.760264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa584000c00 is same with the state(5) to be set
00:14:36.813 [further Read/Write completed with error (sct=0, sc=8) completions follow]
Write completed with error (sct=0, sc=8) 00:14:36.814 Write completed with error (sct=0, sc=8) 00:14:36.814 Read completed with error (sct=0, sc=8) 00:14:36.814 Write completed with error (sct=0, sc=8) 00:14:36.814 Read completed with error (sct=0, sc=8) 00:14:36.814 Read completed with error (sct=0, sc=8) 00:14:36.814 Read completed with error (sct=0, sc=8) 00:14:36.814 Write completed with error (sct=0, sc=8) 00:14:36.814 Read completed with error (sct=0, sc=8) 00:14:36.814 Read completed with error (sct=0, sc=8) 00:14:36.814 Read completed with error (sct=0, sc=8) 00:14:36.814 Read completed with error (sct=0, sc=8) 00:14:37.753 [2024-07-24 22:13:32.730807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8815e0 is same with the state(5) to be set 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 [2024-07-24 22:13:32.761152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa58400c1d0 is same with the state(5) to be set 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write 
completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 [2024-07-24 22:13:32.762287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87dd50 is same with the state(5) to be set 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 [2024-07-24 22:13:32.763094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87e0d0 is same with the state(5) to be set 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error 
(sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 Write completed with error (sct=0, sc=8) 00:14:37.753 Read completed with error (sct=0, sc=8) 00:14:37.753 [2024-07-24 22:13:32.763253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87e630 is same with the state(5) to be set 00:14:37.753 [2024-07-24 22:13:32.763876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8815e0 (9): Bad file descriptor 00:14:37.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:37.753 22:13:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.753 22:13:32 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:37.753 22:13:32 -- target/delete_subsystem.sh@35 -- # kill -0 3504122 00:14:37.753 22:13:32 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:37.753 Initializing NVMe Controllers 00:14:37.753 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:37.753 Controller IO queue size 128, less than required. 00:14:37.753 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:37.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:37.753 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:37.753 Initialization complete. Launching workers. 00:14:37.753 ======================================================== 00:14:37.753 Latency(us) 00:14:37.753 Device Information : IOPS MiB/s Average min max 00:14:37.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.72 0.09 951888.67 733.86 1012551.17 00:14:37.753 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 149.97 0.07 910352.87 204.11 1999545.53 00:14:37.753 ======================================================== 00:14:37.753 Total : 336.69 0.16 933387.48 204.11 1999545.53 00:14:37.753 00:14:38.323 22:13:33 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:38.323 22:13:33 -- target/delete_subsystem.sh@35 -- # kill -0 3504122 00:14:38.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3504122) - No such process 00:14:38.323 22:13:33 -- target/delete_subsystem.sh@45 -- # NOT wait 3504122 00:14:38.323 22:13:33 -- common/autotest_common.sh@640 -- # local es=0 00:14:38.323 22:13:33 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 3504122 00:14:38.323 22:13:33 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:38.323 22:13:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:38.323 22:13:33 -- common/autotest_common.sh@632 -- # type -t wait 00:14:38.323 22:13:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:38.323 22:13:33 -- common/autotest_common.sh@643 -- # wait 3504122 00:14:38.323 22:13:33 -- common/autotest_common.sh@643 -- # es=1 00:14:38.323 22:13:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:38.323 22:13:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:38.323 22:13:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:38.323 22:13:33 -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:38.323 22:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.323 22:13:33 -- common/autotest_common.sh@10 -- # set +x 00:14:38.323 22:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.323 22:13:33 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.323 22:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.323 22:13:33 -- common/autotest_common.sh@10 -- # set +x 00:14:38.323 [2024-07-24 22:13:33.289421] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.323 22:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.323 22:13:33 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.323 22:13:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.323 22:13:33 -- common/autotest_common.sh@10 -- # set +x 00:14:38.323 22:13:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.323 22:13:33 -- target/delete_subsystem.sh@54 -- # perf_pid=3504827 00:14:38.323 22:13:33 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:38.323 22:13:33 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:38.323 22:13:33 -- target/delete_subsystem.sh@57 -- # kill -0 3504827 00:14:38.323 22:13:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:38.323 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.323 [2024-07-24 22:13:33.349312] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
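For readers tracing the flood of error completions above: the delete_subsystem test builds a subsystem backed by a deliberately slow delay bdev, starts spdk_nvme_perf against it, and then deletes the subsystem while I/O is still queued, so perf exiting with "errors occurred" is the expected outcome. The standalone sketch below restates that RPC sequence with the same arguments as the trace; it is illustrative only, and the ./scripts/rpc.py invocation is an assumption (the test itself drives these calls through its rpc_cmd wrapper).

    #!/usr/bin/env bash
    # Illustrative sketch of the delete-under-load scenario traced above.
    rpc=./scripts/rpc.py                    # assumed path to the SPDK RPC helper
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    # Wrap the null bdev in a delay bdev so requests are still in flight
    # when the subsystem is torn down.
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns "$nqn" Delay0

    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    $rpc nvmf_delete_subsystem "$nqn"
    # Every queued request now completes with an error status (the repeated
    # sct=0, sc=8 lines above) and perf exits reporting "errors occurred".
    wait "$perf_pid" || true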
00:14:38.891 22:13:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:38.892 22:13:33 -- target/delete_subsystem.sh@57 -- # kill -0 3504827 00:14:38.892 22:13:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:39.460 22:13:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:39.460 22:13:34 -- target/delete_subsystem.sh@57 -- # kill -0 3504827 00:14:39.460 22:13:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:39.720 22:13:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:39.720 22:13:34 -- target/delete_subsystem.sh@57 -- # kill -0 3504827 00:14:39.720 22:13:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:40.289 22:13:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:40.289 22:13:35 -- target/delete_subsystem.sh@57 -- # kill -0 3504827 00:14:40.289 22:13:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:40.858 22:13:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:40.858 22:13:35 -- target/delete_subsystem.sh@57 -- # kill -0 3504827 00:14:40.858 22:13:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:41.428 22:13:36 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:41.428 22:13:36 -- target/delete_subsystem.sh@57 -- # kill -0 3504827 00:14:41.428 22:13:36 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:41.428 Initializing NVMe Controllers 00:14:41.428 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:41.428 Controller IO queue size 128, less than required. 00:14:41.428 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:41.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:41.428 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:41.428 Initialization complete. Launching workers. 
00:14:41.428 ======================================================== 00:14:41.428 Latency(us) 00:14:41.428 Device Information : IOPS MiB/s Average min max 00:14:41.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003766.88 1000444.41 1011836.09 00:14:41.428 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005056.32 1000678.29 1012011.56 00:14:41.428 ======================================================== 00:14:41.428 Total : 256.00 0.12 1004411.60 1000444.41 1012011.56 00:14:41.428 00:14:41.998 22:13:36 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:41.998 22:13:36 -- target/delete_subsystem.sh@57 -- # kill -0 3504827 00:14:41.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3504827) - No such process 00:14:41.998 22:13:36 -- target/delete_subsystem.sh@67 -- # wait 3504827 00:14:41.998 22:13:36 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:41.998 22:13:36 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:41.998 22:13:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:41.998 22:13:36 -- nvmf/common.sh@116 -- # sync 00:14:41.998 22:13:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:41.998 22:13:36 -- nvmf/common.sh@119 -- # set +e 00:14:41.998 22:13:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:41.998 22:13:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:41.998 rmmod nvme_tcp 00:14:41.998 rmmod nvme_fabrics 00:14:41.998 rmmod nvme_keyring 00:14:41.998 22:13:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:41.998 22:13:36 -- nvmf/common.sh@123 -- # set -e 00:14:41.998 22:13:36 -- nvmf/common.sh@124 -- # return 0 00:14:41.998 22:13:36 -- nvmf/common.sh@477 -- # '[' -n 3503872 ']' 00:14:41.998 22:13:36 -- nvmf/common.sh@478 -- # killprocess 3503872 00:14:41.998 22:13:36 -- common/autotest_common.sh@926 -- # '[' -z 3503872 ']' 00:14:41.998 22:13:36 -- common/autotest_common.sh@930 -- # kill -0 3503872 00:14:41.998 22:13:36 -- common/autotest_common.sh@931 -- # uname 00:14:41.998 22:13:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:41.998 22:13:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3503872 00:14:41.998 22:13:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:41.998 22:13:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:41.998 22:13:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3503872' 00:14:41.998 killing process with pid 3503872 00:14:41.998 22:13:36 -- common/autotest_common.sh@945 -- # kill 3503872 00:14:41.998 22:13:36 -- common/autotest_common.sh@950 -- # wait 3503872 00:14:41.998 22:13:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:41.998 22:13:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:41.998 22:13:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:41.998 22:13:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.998 22:13:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:42.258 22:13:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.258 22:13:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.258 22:13:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.167 22:13:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:44.167 00:14:44.167 real 0m16.288s 00:14:44.167 user 0m30.621s 00:14:44.167 sys 0m5.020s 
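The alternating "(( delay++ > 20 ))" / "kill -0" / "sleep 0.5" lines above are the test's wait loop: it polls the background perf process until the kernel reports it gone (kill -0 eventually fails with "No such process"), bounded by a retry budget, before moving on to teardown. A minimal sketch of that pattern follows; the function wrapper and its name are illustrative, not part of the script.

    # Poll a background process until it exits, giving up after ~10 seconds.
    wait_for_exit() {
        local pid=$1 delay=0
        while kill -0 "$pid" 2>/dev/null; do
            (( delay++ > 20 )) && return 1   # retry budget exhausted
            sleep 0.5
        done
        return 0
    }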
00:14:44.167 22:13:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.167 22:13:39 -- common/autotest_common.sh@10 -- # set +x 00:14:44.167 ************************************ 00:14:44.167 END TEST nvmf_delete_subsystem 00:14:44.167 ************************************ 00:14:44.167 22:13:39 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:14:44.167 22:13:39 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:44.167 22:13:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:44.167 22:13:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:44.167 22:13:39 -- common/autotest_common.sh@10 -- # set +x 00:14:44.167 ************************************ 00:14:44.167 START TEST nvmf_nvme_cli 00:14:44.167 ************************************ 00:14:44.167 22:13:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:44.427 * Looking for test storage... 00:14:44.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.427 22:13:39 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.427 22:13:39 -- nvmf/common.sh@7 -- # uname -s 00:14:44.427 22:13:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.427 22:13:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.427 22:13:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.427 22:13:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.427 22:13:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.427 22:13:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.427 22:13:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.427 22:13:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.427 22:13:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.427 22:13:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.427 22:13:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:44.427 22:13:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:44.427 22:13:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.427 22:13:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.427 22:13:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.427 22:13:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.427 22:13:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.427 22:13:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.427 22:13:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.427 22:13:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.427 22:13:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.427 22:13:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.427 22:13:39 -- paths/export.sh@5 -- # export PATH 00:14:44.427 22:13:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.427 22:13:39 -- nvmf/common.sh@46 -- # : 0 00:14:44.427 22:13:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:44.427 22:13:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:44.427 22:13:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:44.427 22:13:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.427 22:13:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.427 22:13:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:44.427 22:13:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:44.427 22:13:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:44.427 22:13:39 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:44.427 22:13:39 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:44.427 22:13:39 -- target/nvme_cli.sh@14 -- # devs=() 00:14:44.427 22:13:39 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:44.427 22:13:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:44.427 22:13:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.427 22:13:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:44.427 22:13:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:44.427 22:13:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:44.427 22:13:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.427 22:13:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.427 22:13:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.427 22:13:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:44.427 22:13:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:44.427 22:13:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:44.427 22:13:39 -- common/autotest_common.sh@10 -- # set +x 00:14:49.709 22:13:44 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:49.709 22:13:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:49.709 22:13:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:49.709 22:13:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:49.709 22:13:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:49.709 22:13:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:49.709 22:13:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:49.709 22:13:44 -- nvmf/common.sh@294 -- # net_devs=() 00:14:49.709 22:13:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:49.709 22:13:44 -- nvmf/common.sh@295 -- # e810=() 00:14:49.709 22:13:44 -- nvmf/common.sh@295 -- # local -ga e810 00:14:49.709 22:13:44 -- nvmf/common.sh@296 -- # x722=() 00:14:49.709 22:13:44 -- nvmf/common.sh@296 -- # local -ga x722 00:14:49.709 22:13:44 -- nvmf/common.sh@297 -- # mlx=() 00:14:49.709 22:13:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:49.709 22:13:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.709 22:13:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.709 22:13:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.709 22:13:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.709 22:13:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.709 22:13:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.709 22:13:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.709 22:13:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.709 22:13:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.709 22:13:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.709 22:13:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.709 22:13:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:49.709 22:13:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:49.709 22:13:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:49.709 22:13:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:49.709 22:13:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:49.709 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:49.709 22:13:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:49.709 22:13:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:49.709 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:49.709 22:13:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:14:49.709 22:13:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:49.709 22:13:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:49.709 22:13:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.709 22:13:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:49.709 22:13:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.709 22:13:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:49.709 Found net devices under 0000:86:00.0: cvl_0_0 00:14:49.709 22:13:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.709 22:13:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:49.709 22:13:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.709 22:13:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:49.709 22:13:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.709 22:13:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:49.709 Found net devices under 0000:86:00.1: cvl_0_1 00:14:49.709 22:13:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.709 22:13:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:49.709 22:13:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:49.709 22:13:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:49.709 22:13:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:49.709 22:13:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.709 22:13:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.709 22:13:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.709 22:13:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:49.709 22:13:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.709 22:13:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.709 22:13:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:49.709 22:13:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.709 22:13:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.709 22:13:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:49.709 22:13:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:49.709 22:13:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.709 22:13:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.969 22:13:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.969 22:13:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.969 22:13:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:49.969 22:13:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.969 22:13:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.969 22:13:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.970 22:13:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:49.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:49.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:14:49.970 00:14:49.970 --- 10.0.0.2 ping statistics --- 00:14:49.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.970 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:14:49.970 22:13:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:49.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:14:49.970 00:14:49.970 --- 10.0.0.1 ping statistics --- 00:14:49.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.970 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:14:49.970 22:13:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.970 22:13:45 -- nvmf/common.sh@410 -- # return 0 00:14:49.970 22:13:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:49.970 22:13:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.970 22:13:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:49.970 22:13:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:49.970 22:13:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.970 22:13:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:49.970 22:13:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:49.970 22:13:45 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:49.970 22:13:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:49.970 22:13:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:49.970 22:13:45 -- common/autotest_common.sh@10 -- # set +x 00:14:49.970 22:13:45 -- nvmf/common.sh@469 -- # nvmfpid=3508846 00:14:49.970 22:13:45 -- nvmf/common.sh@470 -- # waitforlisten 3508846 00:14:49.970 22:13:45 -- common/autotest_common.sh@819 -- # '[' -z 3508846 ']' 00:14:49.970 22:13:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.970 22:13:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:49.970 22:13:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.970 22:13:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:49.970 22:13:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:49.970 22:13:45 -- common/autotest_common.sh@10 -- # set +x 00:14:49.970 [2024-07-24 22:13:45.086034] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:14:49.970 [2024-07-24 22:13:45.086085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.230 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.230 [2024-07-24 22:13:45.144362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.230 [2024-07-24 22:13:45.184843] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:50.230 [2024-07-24 22:13:45.184955] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.230 [2024-07-24 22:13:45.184963] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
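Before the nvme_cli test starts its target, the common helpers wire the two e810 ports into a loopback topology: the target-side port is moved into its own network namespace so the host acts as initiator (10.0.0.1) against the SPDK target (10.0.0.2), and the ping pair above verifies both directions before nvmf_tgt is launched inside the namespace. The commands traced above reduce to roughly the following sketch; interface names and addresses come from the log, while the relative nvmf_tgt path is shortened here for illustration.

    # Loopback topology used by the phy tests, as traced above (illustrative).
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> host
    modprobe nvme-tcp
    # The target then runs inside the namespace (path shortened here):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF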
00:14:50.230 [2024-07-24 22:13:45.184969] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.230 [2024-07-24 22:13:45.185004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.230 [2024-07-24 22:13:45.185129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.230 [2024-07-24 22:13:45.185151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.230 [2024-07-24 22:13:45.185152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.799 22:13:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:50.799 22:13:45 -- common/autotest_common.sh@852 -- # return 0 00:14:50.799 22:13:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:50.799 22:13:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:50.799 22:13:45 -- common/autotest_common.sh@10 -- # set +x 00:14:50.799 22:13:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.799 22:13:45 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:50.799 22:13:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.799 22:13:45 -- common/autotest_common.sh@10 -- # set +x 00:14:51.059 [2024-07-24 22:13:45.935492] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.059 22:13:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.059 22:13:45 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:51.059 22:13:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.059 22:13:45 -- common/autotest_common.sh@10 -- # set +x 00:14:51.059 Malloc0 00:14:51.059 22:13:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.059 22:13:45 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:51.059 22:13:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.059 22:13:45 -- common/autotest_common.sh@10 -- # set +x 00:14:51.059 Malloc1 00:14:51.059 22:13:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.059 22:13:45 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:51.059 22:13:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.059 22:13:45 -- common/autotest_common.sh@10 -- # set +x 00:14:51.059 22:13:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.059 22:13:45 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:51.059 22:13:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.059 22:13:45 -- common/autotest_common.sh@10 -- # set +x 00:14:51.059 22:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.059 22:13:46 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:51.059 22:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.059 22:13:46 -- common/autotest_common.sh@10 -- # set +x 00:14:51.059 22:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.059 22:13:46 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.059 22:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.059 22:13:46 -- common/autotest_common.sh@10 -- # set +x 00:14:51.059 [2024-07-24 22:13:46.016691] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:14:51.059 22:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.059 22:13:46 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:51.059 22:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:51.059 22:13:46 -- common/autotest_common.sh@10 -- # set +x 00:14:51.059 22:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:51.059 22:13:46 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:51.059 00:14:51.059 Discovery Log Number of Records 2, Generation counter 2 00:14:51.059 =====Discovery Log Entry 0====== 00:14:51.059 trtype: tcp 00:14:51.059 adrfam: ipv4 00:14:51.059 subtype: current discovery subsystem 00:14:51.059 treq: not required 00:14:51.059 portid: 0 00:14:51.059 trsvcid: 4420 00:14:51.059 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:51.059 traddr: 10.0.0.2 00:14:51.059 eflags: explicit discovery connections, duplicate discovery information 00:14:51.059 sectype: none 00:14:51.059 =====Discovery Log Entry 1====== 00:14:51.059 trtype: tcp 00:14:51.059 adrfam: ipv4 00:14:51.059 subtype: nvme subsystem 00:14:51.059 treq: not required 00:14:51.059 portid: 0 00:14:51.059 trsvcid: 4420 00:14:51.059 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:51.059 traddr: 10.0.0.2 00:14:51.059 eflags: none 00:14:51.059 sectype: none 00:14:51.059 22:13:46 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:51.059 22:13:46 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:51.059 22:13:46 -- nvmf/common.sh@510 -- # local dev _ 00:14:51.059 22:13:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:51.059 22:13:46 -- nvmf/common.sh@509 -- # nvme list 00:14:51.059 22:13:46 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:51.059 22:13:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:51.059 22:13:46 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:51.059 22:13:46 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:51.060 22:13:46 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:51.060 22:13:46 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:52.441 22:13:47 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:52.441 22:13:47 -- common/autotest_common.sh@1177 -- # local i=0 00:14:52.441 22:13:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:52.441 22:13:47 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:14:52.441 22:13:47 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:14:52.441 22:13:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:54.367 22:13:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:54.367 22:13:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:54.367 22:13:49 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:54.367 22:13:49 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:14:54.367 22:13:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:54.367 22:13:49 -- common/autotest_common.sh@1187 -- # return 0 00:14:54.367 22:13:49 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:54.367 22:13:49 -- 
nvmf/common.sh@510 -- # local dev _ 00:14:54.367 22:13:49 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:54.367 22:13:49 -- nvmf/common.sh@509 -- # nvme list 00:14:54.367 22:13:49 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:54.367 22:13:49 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:54.367 22:13:49 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:54.367 22:13:49 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:54.367 22:13:49 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:54.367 22:13:49 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:54.367 22:13:49 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:54.367 22:13:49 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:54.367 22:13:49 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:54.367 22:13:49 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:54.367 22:13:49 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:54.367 /dev/nvme0n1 ]] 00:14:54.367 22:13:49 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:54.367 22:13:49 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:54.367 22:13:49 -- nvmf/common.sh@510 -- # local dev _ 00:14:54.367 22:13:49 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:54.367 22:13:49 -- nvmf/common.sh@509 -- # nvme list 00:14:54.367 22:13:49 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:54.367 22:13:49 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:54.367 22:13:49 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:54.367 22:13:49 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:54.367 22:13:49 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:54.367 22:13:49 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:54.367 22:13:49 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:54.367 22:13:49 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:54.367 22:13:49 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:54.367 22:13:49 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:54.367 22:13:49 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:54.367 22:13:49 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:54.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.367 22:13:49 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:54.367 22:13:49 -- common/autotest_common.sh@1198 -- # local i=0 00:14:54.367 22:13:49 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:54.367 22:13:49 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.367 22:13:49 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:54.367 22:13:49 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.367 22:13:49 -- common/autotest_common.sh@1210 -- # return 0 00:14:54.367 22:13:49 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:54.367 22:13:49 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:54.367 22:13:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:54.367 22:13:49 -- common/autotest_common.sh@10 -- # set +x 00:14:54.636 22:13:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:54.636 22:13:49 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:54.637 22:13:49 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:54.637 22:13:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:54.637 22:13:49 -- nvmf/common.sh@116 -- # sync 00:14:54.637 22:13:49 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:54.637 22:13:49 -- nvmf/common.sh@119 -- # set +e 00:14:54.637 22:13:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:54.637 22:13:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:54.637 rmmod nvme_tcp 00:14:54.637 rmmod nvme_fabrics 00:14:54.637 rmmod nvme_keyring 00:14:54.637 22:13:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:54.637 22:13:49 -- nvmf/common.sh@123 -- # set -e 00:14:54.637 22:13:49 -- nvmf/common.sh@124 -- # return 0 00:14:54.637 22:13:49 -- nvmf/common.sh@477 -- # '[' -n 3508846 ']' 00:14:54.637 22:13:49 -- nvmf/common.sh@478 -- # killprocess 3508846 00:14:54.637 22:13:49 -- common/autotest_common.sh@926 -- # '[' -z 3508846 ']' 00:14:54.637 22:13:49 -- common/autotest_common.sh@930 -- # kill -0 3508846 00:14:54.637 22:13:49 -- common/autotest_common.sh@931 -- # uname 00:14:54.637 22:13:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:54.637 22:13:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3508846 00:14:54.637 22:13:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:54.637 22:13:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:54.637 22:13:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3508846' 00:14:54.637 killing process with pid 3508846 00:14:54.637 22:13:49 -- common/autotest_common.sh@945 -- # kill 3508846 00:14:54.637 22:13:49 -- common/autotest_common.sh@950 -- # wait 3508846 00:14:54.897 22:13:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:54.897 22:13:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:54.897 22:13:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:54.897 22:13:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:54.897 22:13:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:54.897 22:13:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.897 22:13:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.897 22:13:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.806 22:13:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:56.806 00:14:56.806 real 0m12.662s 00:14:56.806 user 0m20.079s 00:14:56.806 sys 0m4.815s 00:14:56.806 22:13:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.806 22:13:51 -- common/autotest_common.sh@10 -- # set +x 00:14:56.806 ************************************ 00:14:56.806 END TEST nvmf_nvme_cli 00:14:56.806 ************************************ 00:14:56.806 22:13:51 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:14:56.806 22:13:51 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:56.806 22:13:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:56.807 22:13:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:56.807 22:13:51 -- common/autotest_common.sh@10 -- # set +x 00:14:56.807 ************************************ 00:14:56.807 START TEST nvmf_vfio_user 00:14:56.807 ************************************ 00:14:56.807 22:13:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:57.067 * Looking for test storage... 
00:14:57.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:57.067 22:13:52 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.067 22:13:52 -- nvmf/common.sh@7 -- # uname -s 00:14:57.067 22:13:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.067 22:13:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.067 22:13:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.067 22:13:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.067 22:13:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.067 22:13:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.067 22:13:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.067 22:13:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.067 22:13:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.067 22:13:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.067 22:13:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:57.067 22:13:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:57.067 22:13:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.067 22:13:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.067 22:13:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.067 22:13:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.067 22:13:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.067 22:13:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.067 22:13:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.067 22:13:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.067 22:13:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.067 22:13:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.067 22:13:52 -- paths/export.sh@5 -- # export PATH 00:14:57.067 22:13:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.067 22:13:52 -- nvmf/common.sh@46 -- # : 0 00:14:57.067 22:13:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:57.067 22:13:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:57.067 22:13:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:57.067 22:13:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.067 22:13:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.067 22:13:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:57.067 22:13:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:57.067 22:13:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:57.067 22:13:52 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:57.067 22:13:52 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:57.067 22:13:52 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:57.067 22:13:52 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.067 22:13:52 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:57.067 22:13:52 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:57.068 22:13:52 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:57.068 22:13:52 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:57.068 22:13:52 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:57.068 22:13:52 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:57.068 22:13:52 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3510143 00:14:57.068 22:13:52 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3510143' 00:14:57.068 Process pid: 3510143 00:14:57.068 22:13:52 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:57.068 22:13:52 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3510143 00:14:57.068 22:13:52 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:57.068 22:13:52 -- common/autotest_common.sh@819 -- # '[' -z 3510143 ']' 00:14:57.068 22:13:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.068 22:13:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:57.068 22:13:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.068 22:13:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:57.068 22:13:52 -- common/autotest_common.sh@10 -- # set +x 00:14:57.068 [2024-07-24 22:13:52.102943] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:14:57.068 [2024-07-24 22:13:52.102991] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.068 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.068 [2024-07-24 22:13:52.158400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.068 [2024-07-24 22:13:52.199887] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.068 [2024-07-24 22:13:52.200011] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.068 [2024-07-24 22:13:52.200024] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.068 [2024-07-24 22:13:52.200032] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.068 [2024-07-24 22:13:52.200073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.068 [2024-07-24 22:13:52.200107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.068 [2024-07-24 22:13:52.200203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.068 [2024-07-24 22:13:52.200207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.008 22:13:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:58.008 22:13:52 -- common/autotest_common.sh@852 -- # return 0 00:14:58.008 22:13:52 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:58.947 22:13:53 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:59.207 22:13:54 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:59.207 22:13:54 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:59.207 22:13:54 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.207 22:13:54 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:59.207 22:13:54 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:59.207 Malloc1 00:14:59.207 22:13:54 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:59.467 22:13:54 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:59.726 22:13:54 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:59.726 22:13:54 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.726 22:13:54 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:59.726 22:13:54 -- 
target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:59.987 Malloc2 00:14:59.987 22:13:55 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:00.246 22:13:55 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:00.246 22:13:55 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:00.508 22:13:55 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:00.508 22:13:55 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:00.508 22:13:55 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:00.508 22:13:55 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:00.508 22:13:55 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:00.508 22:13:55 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:00.508 [2024-07-24 22:13:55.563023] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:15:00.508 [2024-07-24 22:13:55.563077] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3510779 ] 00:15:00.508 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.508 [2024-07-24 22:13:55.593453] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:00.508 [2024-07-24 22:13:55.595895] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:00.508 [2024-07-24 22:13:55.595916] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f295d0db000 00:15:00.508 [2024-07-24 22:13:55.596898] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.508 [2024-07-24 22:13:55.597898] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.508 [2024-07-24 22:13:55.598902] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.508 [2024-07-24 22:13:55.599906] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.508 [2024-07-24 22:13:55.600917] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.508 [2024-07-24 22:13:55.601922] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.508 [2024-07-24 22:13:55.602927] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.508 [2024-07-24 22:13:55.603935] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.508 [2024-07-24 22:13:55.604944] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:00.508 [2024-07-24 22:13:55.604953] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f295bea2000 00:15:00.508 [2024-07-24 22:13:55.606010] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:00.508 [2024-07-24 22:13:55.620279] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:00.508 [2024-07-24 22:13:55.620305] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:00.508 [2024-07-24 22:13:55.625101] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:00.508 [2024-07-24 22:13:55.625139] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:00.508 [2024-07-24 22:13:55.625213] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:00.508 [2024-07-24 22:13:55.625232] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:00.508 [2024-07-24 22:13:55.625237] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:00.508 [2024-07-24 22:13:55.626101] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:00.508 [2024-07-24 22:13:55.626110] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:00.508 [2024-07-24 22:13:55.626116] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:00.508 [2024-07-24 22:13:55.627100] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:00.508 [2024-07-24 22:13:55.627109] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:00.508 [2024-07-24 22:13:55.627115] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:00.508 [2024-07-24 22:13:55.628110] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:00.508 [2024-07-24 22:13:55.628117] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:00.508 [2024-07-24 22:13:55.629118] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x1c, value 0x0 00:15:00.508 [2024-07-24 22:13:55.629126] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:00.508 [2024-07-24 22:13:55.629131] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:00.508 [2024-07-24 22:13:55.629136] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:00.508 [2024-07-24 22:13:55.629241] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:00.508 [2024-07-24 22:13:55.629246] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:00.508 [2024-07-24 22:13:55.629250] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:00.508 [2024-07-24 22:13:55.630123] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:00.508 [2024-07-24 22:13:55.631130] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:00.508 [2024-07-24 22:13:55.632140] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:00.508 [2024-07-24 22:13:55.633168] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:00.508 [2024-07-24 22:13:55.634154] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:00.508 [2024-07-24 22:13:55.634162] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:00.508 [2024-07-24 22:13:55.634166] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:00.508 [2024-07-24 22:13:55.634183] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:00.508 [2024-07-24 22:13:55.634190] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:00.508 [2024-07-24 22:13:55.634203] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.508 [2024-07-24 22:13:55.634207] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.508 [2024-07-24 22:13:55.634220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.508 [2024-07-24 22:13:55.634260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:00.508 [2024-07-24 22:13:55.634268] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:00.508 [2024-07-24 22:13:55.634273] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:00.508 [2024-07-24 22:13:55.634276] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:00.508 [2024-07-24 22:13:55.634281] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:00.508 [2024-07-24 22:13:55.634285] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:00.508 [2024-07-24 22:13:55.634289] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:00.508 [2024-07-24 22:13:55.634293] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634301] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:00.509 [2024-07-24 22:13:55.634327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:00.509 [2024-07-24 22:13:55.634339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.509 [2024-07-24 22:13:55.634346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.509 [2024-07-24 22:13:55.634353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.509 [2024-07-24 22:13:55.634360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.509 [2024-07-24 22:13:55.634364] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634372] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:00.509 [2024-07-24 22:13:55.634390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:00.509 [2024-07-24 22:13:55.634395] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:00.509 [2024-07-24 22:13:55.634399] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634405] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634414] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:00.509 [2024-07-24 22:13:55.634435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:00.509 [2024-07-24 22:13:55.634484] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634490] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634497] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:00.509 [2024-07-24 22:13:55.634501] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:00.509 [2024-07-24 22:13:55.634506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:00.509 [2024-07-24 22:13:55.634519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:00.509 [2024-07-24 22:13:55.634530] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:00.509 [2024-07-24 22:13:55.634541] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634547] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634553] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.509 [2024-07-24 22:13:55.634557] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.509 [2024-07-24 22:13:55.634562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.509 [2024-07-24 22:13:55.634580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:00.509 [2024-07-24 22:13:55.634593] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634599] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634605] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.509 [2024-07-24 22:13:55.634609] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.509 [2024-07-24 22:13:55.634614] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.509 [2024-07-24 22:13:55.634629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:00.509 [2024-07-24 22:13:55.634636] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634641] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634648] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634654] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634658] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634663] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:00.509 [2024-07-24 22:13:55.634667] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:00.509 [2024-07-24 22:13:55.634671] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:00.509 [2024-07-24 22:13:55.634687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:00.509 [2024-07-24 22:13:55.634700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:00.509 [2024-07-24 22:13:55.634711] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:00.509 [2024-07-24 22:13:55.634724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:00.509 [2024-07-24 22:13:55.634734] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:00.509 [2024-07-24 22:13:55.634741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:00.509 [2024-07-24 22:13:55.634751] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:00.509 [2024-07-24 22:13:55.634764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:00.509 [2024-07-24 22:13:55.634773] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:00.509 [2024-07-24 22:13:55.634777] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:00.509 [2024-07-24 22:13:55.634780] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:00.509 [2024-07-24 
22:13:55.634783] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:00.509 [2024-07-24 22:13:55.634789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:00.509 [2024-07-24 22:13:55.634796] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:00.509 [2024-07-24 22:13:55.634799] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:00.509 [2024-07-24 22:13:55.634805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:00.509 [2024-07-24 22:13:55.634811] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:00.509 [2024-07-24 22:13:55.634816] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.509 [2024-07-24 22:13:55.634822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.509 [2024-07-24 22:13:55.634828] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:00.509 [2024-07-24 22:13:55.634832] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:00.509 [2024-07-24 22:13:55.634837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:00.509 [2024-07-24 22:13:55.634843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:00.509 [2024-07-24 22:13:55.634855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:00.509 [2024-07-24 22:13:55.634863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:00.509 [2024-07-24 22:13:55.634869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:00.509 ===================================================== 00:15:00.509 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:00.509 ===================================================== 00:15:00.509 Controller Capabilities/Features 00:15:00.509 ================================ 00:15:00.509 Vendor ID: 4e58 00:15:00.509 Subsystem Vendor ID: 4e58 00:15:00.509 Serial Number: SPDK1 00:15:00.509 Model Number: SPDK bdev Controller 00:15:00.509 Firmware Version: 24.01.1 00:15:00.509 Recommended Arb Burst: 6 00:15:00.509 IEEE OUI Identifier: 8d 6b 50 00:15:00.509 Multi-path I/O 00:15:00.509 May have multiple subsystem ports: Yes 00:15:00.509 May have multiple controllers: Yes 00:15:00.509 Associated with SR-IOV VF: No 00:15:00.509 Max Data Transfer Size: 131072 00:15:00.509 Max Number of Namespaces: 32 00:15:00.509 Max Number of I/O Queues: 127 00:15:00.509 NVMe Specification Version (VS): 1.3 00:15:00.509 NVMe Specification Version (Identify): 1.3 00:15:00.509 Maximum Queue Entries: 256 00:15:00.509 Contiguous Queues Required: Yes 00:15:00.509 Arbitration Mechanisms Supported 00:15:00.510 
Weighted Round Robin: Not Supported 00:15:00.510 Vendor Specific: Not Supported 00:15:00.510 Reset Timeout: 15000 ms 00:15:00.510 Doorbell Stride: 4 bytes 00:15:00.510 NVM Subsystem Reset: Not Supported 00:15:00.510 Command Sets Supported 00:15:00.510 NVM Command Set: Supported 00:15:00.510 Boot Partition: Not Supported 00:15:00.510 Memory Page Size Minimum: 4096 bytes 00:15:00.510 Memory Page Size Maximum: 4096 bytes 00:15:00.510 Persistent Memory Region: Not Supported 00:15:00.510 Optional Asynchronous Events Supported 00:15:00.510 Namespace Attribute Notices: Supported 00:15:00.510 Firmware Activation Notices: Not Supported 00:15:00.510 ANA Change Notices: Not Supported 00:15:00.510 PLE Aggregate Log Change Notices: Not Supported 00:15:00.510 LBA Status Info Alert Notices: Not Supported 00:15:00.510 EGE Aggregate Log Change Notices: Not Supported 00:15:00.510 Normal NVM Subsystem Shutdown event: Not Supported 00:15:00.510 Zone Descriptor Change Notices: Not Supported 00:15:00.510 Discovery Log Change Notices: Not Supported 00:15:00.510 Controller Attributes 00:15:00.510 128-bit Host Identifier: Supported 00:15:00.510 Non-Operational Permissive Mode: Not Supported 00:15:00.510 NVM Sets: Not Supported 00:15:00.510 Read Recovery Levels: Not Supported 00:15:00.510 Endurance Groups: Not Supported 00:15:00.510 Predictable Latency Mode: Not Supported 00:15:00.510 Traffic Based Keep ALive: Not Supported 00:15:00.510 Namespace Granularity: Not Supported 00:15:00.510 SQ Associations: Not Supported 00:15:00.510 UUID List: Not Supported 00:15:00.510 Multi-Domain Subsystem: Not Supported 00:15:00.510 Fixed Capacity Management: Not Supported 00:15:00.510 Variable Capacity Management: Not Supported 00:15:00.510 Delete Endurance Group: Not Supported 00:15:00.510 Delete NVM Set: Not Supported 00:15:00.510 Extended LBA Formats Supported: Not Supported 00:15:00.510 Flexible Data Placement Supported: Not Supported 00:15:00.510 00:15:00.510 Controller Memory Buffer Support 00:15:00.510 ================================ 00:15:00.510 Supported: No 00:15:00.510 00:15:00.510 Persistent Memory Region Support 00:15:00.510 ================================ 00:15:00.510 Supported: No 00:15:00.510 00:15:00.510 Admin Command Set Attributes 00:15:00.510 ============================ 00:15:00.510 Security Send/Receive: Not Supported 00:15:00.510 Format NVM: Not Supported 00:15:00.510 Firmware Activate/Download: Not Supported 00:15:00.510 Namespace Management: Not Supported 00:15:00.510 Device Self-Test: Not Supported 00:15:00.510 Directives: Not Supported 00:15:00.510 NVMe-MI: Not Supported 00:15:00.510 Virtualization Management: Not Supported 00:15:00.510 Doorbell Buffer Config: Not Supported 00:15:00.510 Get LBA Status Capability: Not Supported 00:15:00.510 Command & Feature Lockdown Capability: Not Supported 00:15:00.510 Abort Command Limit: 4 00:15:00.510 Async Event Request Limit: 4 00:15:00.510 Number of Firmware Slots: N/A 00:15:00.510 Firmware Slot 1 Read-Only: N/A 00:15:00.510 Firmware Activation Without Reset: N/A 00:15:00.510 Multiple Update Detection Support: N/A 00:15:00.510 Firmware Update Granularity: No Information Provided 00:15:00.510 Per-Namespace SMART Log: No 00:15:00.510 Asymmetric Namespace Access Log Page: Not Supported 00:15:00.510 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:00.510 Command Effects Log Page: Supported 00:15:00.510 Get Log Page Extended Data: Supported 00:15:00.510 Telemetry Log Pages: Not Supported 00:15:00.510 Persistent Event Log Pages: Not Supported 00:15:00.510 Supported 
Log Pages Log Page: May Support 00:15:00.510 Commands Supported & Effects Log Page: Not Supported 00:15:00.510 Feature Identifiers & Effects Log Page:May Support 00:15:00.510 NVMe-MI Commands & Effects Log Page: May Support 00:15:00.510 Data Area 4 for Telemetry Log: Not Supported 00:15:00.510 Error Log Page Entries Supported: 128 00:15:00.510 Keep Alive: Supported 00:15:00.510 Keep Alive Granularity: 10000 ms 00:15:00.510 00:15:00.510 NVM Command Set Attributes 00:15:00.510 ========================== 00:15:00.510 Submission Queue Entry Size 00:15:00.510 Max: 64 00:15:00.510 Min: 64 00:15:00.510 Completion Queue Entry Size 00:15:00.510 Max: 16 00:15:00.510 Min: 16 00:15:00.510 Number of Namespaces: 32 00:15:00.510 Compare Command: Supported 00:15:00.510 Write Uncorrectable Command: Not Supported 00:15:00.510 Dataset Management Command: Supported 00:15:00.510 Write Zeroes Command: Supported 00:15:00.510 Set Features Save Field: Not Supported 00:15:00.510 Reservations: Not Supported 00:15:00.510 Timestamp: Not Supported 00:15:00.510 Copy: Supported 00:15:00.510 Volatile Write Cache: Present 00:15:00.510 Atomic Write Unit (Normal): 1 00:15:00.510 Atomic Write Unit (PFail): 1 00:15:00.510 Atomic Compare & Write Unit: 1 00:15:00.510 Fused Compare & Write: Supported 00:15:00.510 Scatter-Gather List 00:15:00.510 SGL Command Set: Supported (Dword aligned) 00:15:00.510 SGL Keyed: Not Supported 00:15:00.510 SGL Bit Bucket Descriptor: Not Supported 00:15:00.510 SGL Metadata Pointer: Not Supported 00:15:00.510 Oversized SGL: Not Supported 00:15:00.510 SGL Metadata Address: Not Supported 00:15:00.510 SGL Offset: Not Supported 00:15:00.510 Transport SGL Data Block: Not Supported 00:15:00.510 Replay Protected Memory Block: Not Supported 00:15:00.510 00:15:00.510 Firmware Slot Information 00:15:00.510 ========================= 00:15:00.510 Active slot: 1 00:15:00.510 Slot 1 Firmware Revision: 24.01.1 00:15:00.510 00:15:00.510 00:15:00.510 Commands Supported and Effects 00:15:00.510 ============================== 00:15:00.510 Admin Commands 00:15:00.510 -------------- 00:15:00.510 Get Log Page (02h): Supported 00:15:00.510 Identify (06h): Supported 00:15:00.510 Abort (08h): Supported 00:15:00.510 Set Features (09h): Supported 00:15:00.510 Get Features (0Ah): Supported 00:15:00.510 Asynchronous Event Request (0Ch): Supported 00:15:00.510 Keep Alive (18h): Supported 00:15:00.510 I/O Commands 00:15:00.510 ------------ 00:15:00.510 Flush (00h): Supported LBA-Change 00:15:00.510 Write (01h): Supported LBA-Change 00:15:00.510 Read (02h): Supported 00:15:00.510 Compare (05h): Supported 00:15:00.510 Write Zeroes (08h): Supported LBA-Change 00:15:00.510 Dataset Management (09h): Supported LBA-Change 00:15:00.510 Copy (19h): Supported LBA-Change 00:15:00.510 Unknown (79h): Supported LBA-Change 00:15:00.510 Unknown (7Ah): Supported 00:15:00.510 00:15:00.510 Error Log 00:15:00.510 ========= 00:15:00.510 00:15:00.510 Arbitration 00:15:00.510 =========== 00:15:00.510 Arbitration Burst: 1 00:15:00.510 00:15:00.510 Power Management 00:15:00.510 ================ 00:15:00.510 Number of Power States: 1 00:15:00.510 Current Power State: Power State #0 00:15:00.510 Power State #0: 00:15:00.510 Max Power: 0.00 W 00:15:00.510 Non-Operational State: Operational 00:15:00.510 Entry Latency: Not Reported 00:15:00.510 Exit Latency: Not Reported 00:15:00.510 Relative Read Throughput: 0 00:15:00.510 Relative Read Latency: 0 00:15:00.510 Relative Write Throughput: 0 00:15:00.510 Relative Write Latency: 0 00:15:00.510 Idle Power: Not 
Reported 00:15:00.510 Active Power: Not Reported 00:15:00.510 Non-Operational Permissive Mode: Not Supported 00:15:00.510 00:15:00.510 Health Information 00:15:00.510 ================== 00:15:00.510 Critical Warnings: 00:15:00.510 Available Spare Space: OK 00:15:00.510 Temperature: OK 00:15:00.510 Device Reliability: OK 00:15:00.510 Read Only: No 00:15:00.510 Volatile Memory Backup: OK 00:15:00.510 Current Temperature: 0 Kelvin[2024-07-24 22:13:55.634960] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:00.510 [2024-07-24 22:13:55.634967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:00.510 [2024-07-24 22:13:55.634991] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:00.510 [2024-07-24 22:13:55.634999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.510 [2024-07-24 22:13:55.635005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.510 [2024-07-24 22:13:55.635010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.510 [2024-07-24 22:13:55.635016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.511 [2024-07-24 22:13:55.636055] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:00.511 [2024-07-24 22:13:55.636065] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:00.511 [2024-07-24 22:13:55.636185] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:00.511 [2024-07-24 22:13:55.636190] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:00.511 [2024-07-24 22:13:55.637165] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:00.511 [2024-07-24 22:13:55.637175] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:00.511 [2024-07-24 22:13:55.637225] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:00.770 [2024-07-24 22:13:55.641052] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:00.770 (-273 Celsius) 00:15:00.770 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:00.770 Available Spare: 0% 00:15:00.770 Available Spare Threshold: 0% 00:15:00.770 Life Percentage Used: 0% 00:15:00.770 Data Units Read: 0 00:15:00.770 Data Units Written: 0 00:15:00.770 Host Read Commands: 0 00:15:00.770 Host Write Commands: 0 00:15:00.770 Controller Busy Time: 0 minutes 00:15:00.770 Power Cycles: 0 00:15:00.770 Power On Hours: 0 hours 00:15:00.770 Unsafe Shutdowns: 0 00:15:00.770 Unrecoverable Media Errors: 0 00:15:00.770 Lifetime Error Log Entries: 0 00:15:00.770 Warning Temperature 
Time: 0 minutes 00:15:00.770 Critical Temperature Time: 0 minutes 00:15:00.770 00:15:00.770 Number of Queues 00:15:00.770 ================ 00:15:00.770 Number of I/O Submission Queues: 127 00:15:00.770 Number of I/O Completion Queues: 127 00:15:00.770 00:15:00.770 Active Namespaces 00:15:00.770 ================= 00:15:00.770 Namespace ID:1 00:15:00.770 Error Recovery Timeout: Unlimited 00:15:00.770 Command Set Identifier: NVM (00h) 00:15:00.770 Deallocate: Supported 00:15:00.771 Deallocated/Unwritten Error: Not Supported 00:15:00.771 Deallocated Read Value: Unknown 00:15:00.771 Deallocate in Write Zeroes: Not Supported 00:15:00.771 Deallocated Guard Field: 0xFFFF 00:15:00.771 Flush: Supported 00:15:00.771 Reservation: Supported 00:15:00.771 Namespace Sharing Capabilities: Multiple Controllers 00:15:00.771 Size (in LBAs): 131072 (0GiB) 00:15:00.771 Capacity (in LBAs): 131072 (0GiB) 00:15:00.771 Utilization (in LBAs): 131072 (0GiB) 00:15:00.771 NGUID: FCBC49ED32684D4AB74CE8B071FB53C8 00:15:00.771 UUID: fcbc49ed-3268-4d4a-b74c-e8b071fb53c8 00:15:00.771 Thin Provisioning: Not Supported 00:15:00.771 Per-NS Atomic Units: Yes 00:15:00.771 Atomic Boundary Size (Normal): 0 00:15:00.771 Atomic Boundary Size (PFail): 0 00:15:00.771 Atomic Boundary Offset: 0 00:15:00.771 Maximum Single Source Range Length: 65535 00:15:00.771 Maximum Copy Length: 65535 00:15:00.771 Maximum Source Range Count: 1 00:15:00.771 NGUID/EUI64 Never Reused: No 00:15:00.771 Namespace Write Protected: No 00:15:00.771 Number of LBA Formats: 1 00:15:00.771 Current LBA Format: LBA Format #00 00:15:00.771 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:00.771 00:15:00.771 22:13:55 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:00.771 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.052 Initializing NVMe Controllers 00:15:06.052 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.052 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:06.052 Initialization complete. Launching workers. 00:15:06.052 ======================================================== 00:15:06.052 Latency(us) 00:15:06.052 Device Information : IOPS MiB/s Average min max 00:15:06.052 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39874.76 155.76 3211.65 985.31 7575.53 00:15:06.052 ======================================================== 00:15:06.052 Total : 39874.76 155.76 3211.65 985.31 7575.53 00:15:06.052 00:15:06.052 22:14:00 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:06.052 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.336 Initializing NVMe Controllers 00:15:11.336 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:11.336 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:11.336 Initialization complete. Launching workers. 
00:15:11.336 ======================================================== 00:15:11.336 Latency(us) 00:15:11.336 Device Information : IOPS MiB/s Average min max 00:15:11.336 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.09 62.70 7979.88 5979.70 9977.97 00:15:11.336 ======================================================== 00:15:11.336 Total : 16051.09 62.70 7979.88 5979.70 9977.97 00:15:11.336 00:15:11.336 22:14:06 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:11.336 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.620 Initializing NVMe Controllers 00:15:16.620 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:16.620 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:16.620 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:16.620 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:16.620 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:16.620 Initialization complete. Launching workers. 00:15:16.620 Starting thread on core 2 00:15:16.620 Starting thread on core 3 00:15:16.620 Starting thread on core 1 00:15:16.620 22:14:11 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:16.620 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.045 Initializing NVMe Controllers 00:15:20.045 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:20.045 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:20.045 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:20.045 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:20.045 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:20.045 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:20.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:20.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:20.045 Initialization complete. Launching workers. 
00:15:20.045 Starting thread on core 1 with urgent priority queue 00:15:20.045 Starting thread on core 2 with urgent priority queue 00:15:20.045 Starting thread on core 3 with urgent priority queue 00:15:20.045 Starting thread on core 0 with urgent priority queue 00:15:20.045 SPDK bdev Controller (SPDK1 ) core 0: 7662.00 IO/s 13.05 secs/100000 ios 00:15:20.045 SPDK bdev Controller (SPDK1 ) core 1: 8412.67 IO/s 11.89 secs/100000 ios 00:15:20.045 SPDK bdev Controller (SPDK1 ) core 2: 7585.00 IO/s 13.18 secs/100000 ios 00:15:20.045 SPDK bdev Controller (SPDK1 ) core 3: 7412.33 IO/s 13.49 secs/100000 ios 00:15:20.045 ======================================================== 00:15:20.045 00:15:20.045 22:14:14 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:20.045 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.045 Initializing NVMe Controllers 00:15:20.045 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:20.045 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:20.045 Namespace ID: 1 size: 0GB 00:15:20.045 Initialization complete. 00:15:20.045 INFO: using host memory buffer for IO 00:15:20.045 Hello world! 00:15:20.045 22:14:15 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:20.045 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.425 Initializing NVMe Controllers 00:15:21.425 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.425 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.425 Initialization complete. Launching workers. 
00:15:21.425 submit (in ns) avg, min, max = 8305.4, 3253.9, 4000163.5 00:15:21.425 complete (in ns) avg, min, max = 20769.5, 1792.2, 3998681.7 00:15:21.425 00:15:21.425 Submit histogram 00:15:21.425 ================ 00:15:21.425 Range in us Cumulative Count 00:15:21.425 3.242 - 3.256: 0.0061% ( 1) 00:15:21.425 3.256 - 3.270: 0.0425% ( 6) 00:15:21.425 3.270 - 3.283: 0.3645% ( 53) 00:15:21.425 3.283 - 3.297: 1.9258% ( 257) 00:15:21.425 3.297 - 3.311: 5.4432% ( 579) 00:15:21.425 3.311 - 3.325: 9.5316% ( 673) 00:15:21.425 3.325 - 3.339: 15.0659% ( 911) 00:15:21.425 3.339 - 3.353: 21.2745% ( 1022) 00:15:21.425 3.353 - 3.367: 26.8088% ( 911) 00:15:21.425 3.367 - 3.381: 32.5132% ( 939) 00:15:21.425 3.381 - 3.395: 38.2055% ( 937) 00:15:21.425 3.395 - 3.409: 42.7738% ( 752) 00:15:21.425 3.409 - 3.423: 47.0628% ( 706) 00:15:21.425 3.423 - 3.437: 51.8741% ( 792) 00:15:21.425 3.437 - 3.450: 58.2225% ( 1045) 00:15:21.425 3.450 - 3.464: 63.3133% ( 838) 00:15:21.425 3.464 - 3.478: 68.2401% ( 811) 00:15:21.425 3.478 - 3.492: 73.8898% ( 930) 00:15:21.425 3.492 - 3.506: 78.0633% ( 687) 00:15:21.425 3.506 - 3.520: 81.3134% ( 535) 00:15:21.425 3.520 - 3.534: 83.7798% ( 406) 00:15:21.425 3.534 - 3.548: 85.6145% ( 302) 00:15:21.425 3.548 - 3.562: 86.5257% ( 150) 00:15:21.425 3.562 - 3.590: 87.7407% ( 200) 00:15:21.425 3.590 - 3.617: 88.9739% ( 203) 00:15:21.425 3.617 - 3.645: 90.6446% ( 275) 00:15:21.425 3.645 - 3.673: 92.3577% ( 282) 00:15:21.425 3.673 - 3.701: 94.0769% ( 283) 00:15:21.426 3.701 - 3.729: 95.9480% ( 308) 00:15:21.426 3.729 - 3.757: 97.3392% ( 229) 00:15:21.426 3.757 - 3.784: 98.1714% ( 137) 00:15:21.426 3.784 - 3.812: 98.8154% ( 106) 00:15:21.426 3.812 - 3.840: 99.1981% ( 63) 00:15:21.426 3.840 - 3.868: 99.4715% ( 45) 00:15:21.426 3.868 - 3.896: 99.5201% ( 8) 00:15:21.426 3.896 - 3.923: 99.5505% ( 5) 00:15:21.426 3.923 - 3.951: 99.5565% ( 1) 00:15:21.426 3.951 - 3.979: 99.5626% ( 1) 00:15:21.426 4.007 - 4.035: 99.5687% ( 1) 00:15:21.426 4.703 - 4.730: 99.5748% ( 1) 00:15:21.426 5.203 - 5.231: 99.5808% ( 1) 00:15:21.426 5.259 - 5.287: 99.5869% ( 1) 00:15:21.426 5.287 - 5.315: 99.5930% ( 1) 00:15:21.426 5.315 - 5.343: 99.5991% ( 1) 00:15:21.426 5.343 - 5.370: 99.6051% ( 1) 00:15:21.426 5.398 - 5.426: 99.6112% ( 1) 00:15:21.426 5.454 - 5.482: 99.6173% ( 1) 00:15:21.426 5.510 - 5.537: 99.6294% ( 2) 00:15:21.426 5.732 - 5.760: 99.6355% ( 1) 00:15:21.426 5.899 - 5.927: 99.6537% ( 3) 00:15:21.426 6.122 - 6.150: 99.6659% ( 2) 00:15:21.426 6.150 - 6.177: 99.6720% ( 1) 00:15:21.426 6.205 - 6.233: 99.6780% ( 1) 00:15:21.426 6.289 - 6.317: 99.6902% ( 2) 00:15:21.426 6.317 - 6.344: 99.6963% ( 1) 00:15:21.426 6.344 - 6.372: 99.7084% ( 2) 00:15:21.426 6.400 - 6.428: 99.7145% ( 1) 00:15:21.426 6.428 - 6.456: 99.7327% ( 3) 00:15:21.426 6.539 - 6.567: 99.7388% ( 1) 00:15:21.426 6.567 - 6.595: 99.7449% ( 1) 00:15:21.426 6.706 - 6.734: 99.7509% ( 1) 00:15:21.426 6.734 - 6.762: 99.7570% ( 1) 00:15:21.426 6.762 - 6.790: 99.7631% ( 1) 00:15:21.426 6.817 - 6.845: 99.7752% ( 2) 00:15:21.426 6.873 - 6.901: 99.7813% ( 1) 00:15:21.426 6.929 - 6.957: 99.7874% ( 1) 00:15:21.426 6.957 - 6.984: 99.7935% ( 1) 00:15:21.426 7.096 - 7.123: 99.7995% ( 1) 00:15:21.426 7.123 - 7.179: 99.8117% ( 2) 00:15:21.426 7.290 - 7.346: 99.8178% ( 1) 00:15:21.426 7.402 - 7.457: 99.8299% ( 2) 00:15:21.426 7.513 - 7.569: 99.8360% ( 1) 00:15:21.426 7.569 - 7.624: 99.8421% ( 1) 00:15:21.426 7.624 - 7.680: 99.8481% ( 1) 00:15:21.426 7.680 - 7.736: 99.8603% ( 2) 00:15:21.426 7.791 - 7.847: 99.8664% ( 1) 00:15:21.426 8.181 - 8.237: 99.8724% ( 1) 
00:15:21.426 8.348 - 8.403: 99.8785% ( 1) 00:15:21.426 3903.666 - 3932.160: 99.8846% ( 1) 00:15:21.426 3989.148 - 4017.642: 100.0000% ( 19) 00:15:21.426 00:15:21.426 Complete histogram 00:15:21.426 ================== 00:15:21.426 Range in us Cumulative Count 00:15:21.426 1.781 - 1.795: 0.0182% ( 3) 00:15:21.426 1.795 - 1.809: 4.3315% ( 710) 00:15:21.426 1.809 - 1.823: 36.2493% ( 5254) 00:15:21.426 1.823 - 1.837: 65.0021% ( 4733) 00:15:21.426 1.837 - 1.850: 74.5398% ( 1570) 00:15:21.426 1.850 - 1.864: 84.2719% ( 1602) 00:15:21.426 1.864 - 1.878: 92.7161% ( 1390) 00:15:21.426 1.878 - 1.892: 96.1302% ( 562) 00:15:21.426 1.892 - 1.906: 98.1168% ( 327) 00:15:21.426 1.906 - 1.920: 99.0037% ( 146) 00:15:21.426 1.920 - 1.934: 99.1860% ( 30) 00:15:21.426 1.934 - 1.948: 99.2224% ( 6) 00:15:21.426 1.948 - 1.962: 99.2589% ( 6) 00:15:21.426 1.962 - 1.976: 99.2832% ( 4) 00:15:21.426 1.976 - 1.990: 99.2953% ( 2) 00:15:21.426 1.990 - 2.003: 99.3014% ( 1) 00:15:21.426 2.003 - 2.017: 99.3135% ( 2) 00:15:21.426 2.017 - 2.031: 99.3257% ( 2) 00:15:21.426 2.031 - 2.045: 99.3439% ( 3) 00:15:21.426 2.045 - 2.059: 99.3500% ( 1) 00:15:21.426 2.115 - 2.129: 99.3561% ( 1) 00:15:21.426 2.170 - 2.184: 99.3621% ( 1) 00:15:21.426 2.240 - 2.254: 99.3682% ( 1) 00:15:21.426 2.254 - 2.268: 99.3743% ( 1) 00:15:21.426 2.463 - 2.477: 99.3804% ( 1) 00:15:21.426 2.477 - 2.490: 99.3864% ( 1) 00:15:21.426 3.784 - 3.812: 99.3925% ( 1) 00:15:21.426 3.923 - 3.951: 99.3986% ( 1) 00:15:21.426 3.951 - 3.979: 99.4047% ( 1) 00:15:21.426 4.007 - 4.035: 99.4107% ( 1) 00:15:21.426 4.035 - 4.063: 99.4168% ( 1) 00:15:21.426 4.313 - 4.341: 99.4229% ( 1) 00:15:21.426 4.341 - 4.369: 99.4290% ( 1) 00:15:21.426 4.424 - 4.452: 99.4350% ( 1) 00:15:21.426 4.675 - 4.703: 99.4411% ( 1) 00:15:21.426 4.786 - 4.814: 99.4472% ( 1) 00:15:21.426 4.953 - 4.981: 99.4533% ( 1) 00:15:21.426 5.203 - 5.231: 99.4593% ( 1) 00:15:21.426 5.398 - 5.426: 99.4654% ( 1) 00:15:21.426 5.454 - 5.482: 99.4776% ( 2) 00:15:21.426 5.621 - 5.649: 99.4836% ( 1) 00:15:21.426 5.843 - 5.871: 99.4897% ( 1) 00:15:21.426 6.623 - 6.650: 99.4958% ( 1) 00:15:21.426 8.682 - 8.737: 99.5019% ( 1) 00:15:21.426 9.628 - 9.683: 99.5079% ( 1) 00:15:21.426 11.576 - 11.631: 99.5140% ( 1) 00:15:21.426 12.132 - 12.188: 99.5201% ( 1) 00:15:21.426 12.744 - 12.800: 99.5262% ( 1) 00:15:21.426 3989.148 - 4017.642: 100.0000% ( 78) 00:15:21.426 00:15:21.426 22:14:16 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:21.426 22:14:16 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:21.426 22:14:16 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:21.426 22:14:16 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:21.426 22:14:16 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.426 [2024-07-24 22:14:16.554056] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:21.686 [ 00:15:21.686 { 00:15:21.686 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:21.686 "subtype": "Discovery", 00:15:21.686 "listen_addresses": [], 00:15:21.686 "allow_any_host": true, 00:15:21.686 "hosts": [] 00:15:21.686 }, 00:15:21.686 { 00:15:21.686 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:21.686 "subtype": "NVMe", 00:15:21.686 "listen_addresses": [ 00:15:21.686 { 
00:15:21.686 "transport": "VFIOUSER", 00:15:21.686 "trtype": "VFIOUSER", 00:15:21.686 "adrfam": "IPv4", 00:15:21.686 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.686 "trsvcid": "0" 00:15:21.686 } 00:15:21.686 ], 00:15:21.686 "allow_any_host": true, 00:15:21.686 "hosts": [], 00:15:21.686 "serial_number": "SPDK1", 00:15:21.686 "model_number": "SPDK bdev Controller", 00:15:21.686 "max_namespaces": 32, 00:15:21.686 "min_cntlid": 1, 00:15:21.686 "max_cntlid": 65519, 00:15:21.686 "namespaces": [ 00:15:21.686 { 00:15:21.686 "nsid": 1, 00:15:21.686 "bdev_name": "Malloc1", 00:15:21.686 "name": "Malloc1", 00:15:21.686 "nguid": "FCBC49ED32684D4AB74CE8B071FB53C8", 00:15:21.686 "uuid": "fcbc49ed-3268-4d4a-b74c-e8b071fb53c8" 00:15:21.686 } 00:15:21.686 ] 00:15:21.686 }, 00:15:21.686 { 00:15:21.686 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:21.686 "subtype": "NVMe", 00:15:21.686 "listen_addresses": [ 00:15:21.686 { 00:15:21.686 "transport": "VFIOUSER", 00:15:21.686 "trtype": "VFIOUSER", 00:15:21.686 "adrfam": "IPv4", 00:15:21.686 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.686 "trsvcid": "0" 00:15:21.686 } 00:15:21.686 ], 00:15:21.686 "allow_any_host": true, 00:15:21.686 "hosts": [], 00:15:21.686 "serial_number": "SPDK2", 00:15:21.686 "model_number": "SPDK bdev Controller", 00:15:21.686 "max_namespaces": 32, 00:15:21.686 "min_cntlid": 1, 00:15:21.686 "max_cntlid": 65519, 00:15:21.686 "namespaces": [ 00:15:21.686 { 00:15:21.686 "nsid": 1, 00:15:21.686 "bdev_name": "Malloc2", 00:15:21.686 "name": "Malloc2", 00:15:21.686 "nguid": "5B7C8FD0564A4F96AC3FEEEEE9C29317", 00:15:21.686 "uuid": "5b7c8fd0-564a-4f96-ac3f-eeeee9c29317" 00:15:21.686 } 00:15:21.686 ] 00:15:21.686 } 00:15:21.686 ] 00:15:21.687 22:14:16 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:21.687 22:14:16 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3514374 00:15:21.687 22:14:16 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:21.687 22:14:16 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:21.687 22:14:16 -- common/autotest_common.sh@1244 -- # local i=0 00:15:21.687 22:14:16 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:21.687 22:14:16 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:21.687 22:14:16 -- common/autotest_common.sh@1255 -- # return 0 00:15:21.687 22:14:16 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:21.687 22:14:16 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:21.687 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.687 Malloc3 00:15:21.687 22:14:16 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:21.947 22:14:16 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.947 Asynchronous Event Request test 00:15:21.947 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.947 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.947 Registering asynchronous event callbacks... 00:15:21.947 Starting namespace attribute notice tests for all controllers... 
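The namespace-attribute AER exercise above uses a touch-file handshake: the aer example is started in the background with -t /tmp/aer_touch_file, the script waits for that file as a ready signal, and then creates Malloc3 and attaches it as a second namespace so the Changed Namespace notice fires (the aer_cb line follows directly below). A hedged sketch of the same sequence, built only from the aer and rpc.py invocations visible in this log (SPDK_DIR and TRID are illustrative names):

# AER namespace-attribute-notice test against vfio-user1 (sketch of the sequence logged above).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
rm -f /tmp/aer_touch_file
# Start the AER listener; -t names the touch file it uses to signal that its AER is armed.
"$SPDK_DIR/test/nvme/aer/aer" -r "$TRID" -n 2 -g -t /tmp/aer_touch_file &
aerpid=$!
while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
# Adding a second namespace is what triggers the Changed Namespace notice seen in the next line.
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create 64 512 --name Malloc3
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
wait "$aerpid"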
00:15:21.947 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:21.947 aer_cb - Changed Namespace 00:15:21.947 Cleaning up... 00:15:22.208 [ 00:15:22.208 { 00:15:22.208 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:22.208 "subtype": "Discovery", 00:15:22.208 "listen_addresses": [], 00:15:22.208 "allow_any_host": true, 00:15:22.208 "hosts": [] 00:15:22.208 }, 00:15:22.208 { 00:15:22.208 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:22.208 "subtype": "NVMe", 00:15:22.208 "listen_addresses": [ 00:15:22.208 { 00:15:22.208 "transport": "VFIOUSER", 00:15:22.208 "trtype": "VFIOUSER", 00:15:22.208 "adrfam": "IPv4", 00:15:22.208 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:22.208 "trsvcid": "0" 00:15:22.208 } 00:15:22.208 ], 00:15:22.208 "allow_any_host": true, 00:15:22.208 "hosts": [], 00:15:22.208 "serial_number": "SPDK1", 00:15:22.208 "model_number": "SPDK bdev Controller", 00:15:22.208 "max_namespaces": 32, 00:15:22.208 "min_cntlid": 1, 00:15:22.208 "max_cntlid": 65519, 00:15:22.208 "namespaces": [ 00:15:22.208 { 00:15:22.209 "nsid": 1, 00:15:22.209 "bdev_name": "Malloc1", 00:15:22.209 "name": "Malloc1", 00:15:22.209 "nguid": "FCBC49ED32684D4AB74CE8B071FB53C8", 00:15:22.209 "uuid": "fcbc49ed-3268-4d4a-b74c-e8b071fb53c8" 00:15:22.209 }, 00:15:22.209 { 00:15:22.209 "nsid": 2, 00:15:22.209 "bdev_name": "Malloc3", 00:15:22.209 "name": "Malloc3", 00:15:22.209 "nguid": "9937446DCEE04ADBBAE2D8238AF2ED19", 00:15:22.209 "uuid": "9937446d-cee0-4adb-bae2-d8238af2ed19" 00:15:22.209 } 00:15:22.209 ] 00:15:22.209 }, 00:15:22.209 { 00:15:22.209 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:22.209 "subtype": "NVMe", 00:15:22.209 "listen_addresses": [ 00:15:22.209 { 00:15:22.209 "transport": "VFIOUSER", 00:15:22.209 "trtype": "VFIOUSER", 00:15:22.209 "adrfam": "IPv4", 00:15:22.209 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:22.209 "trsvcid": "0" 00:15:22.209 } 00:15:22.209 ], 00:15:22.209 "allow_any_host": true, 00:15:22.209 "hosts": [], 00:15:22.209 "serial_number": "SPDK2", 00:15:22.209 "model_number": "SPDK bdev Controller", 00:15:22.209 "max_namespaces": 32, 00:15:22.209 "min_cntlid": 1, 00:15:22.209 "max_cntlid": 65519, 00:15:22.209 "namespaces": [ 00:15:22.209 { 00:15:22.209 "nsid": 1, 00:15:22.209 "bdev_name": "Malloc2", 00:15:22.209 "name": "Malloc2", 00:15:22.209 "nguid": "5B7C8FD0564A4F96AC3FEEEEE9C29317", 00:15:22.209 "uuid": "5b7c8fd0-564a-4f96-ac3f-eeeee9c29317" 00:15:22.209 } 00:15:22.209 ] 00:15:22.209 } 00:15:22.209 ] 00:15:22.209 22:14:17 -- target/nvmf_vfio_user.sh@44 -- # wait 3514374 00:15:22.209 22:14:17 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:22.209 22:14:17 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:22.209 22:14:17 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:22.209 22:14:17 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:22.209 [2024-07-24 22:14:17.153232] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:15:22.209 [2024-07-24 22:14:17.153261] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3514385 ] 00:15:22.209 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.209 [2024-07-24 22:14:17.181310] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:22.209 [2024-07-24 22:14:17.190288] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.209 [2024-07-24 22:14:17.190310] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2800bac000 00:15:22.209 [2024-07-24 22:14:17.191295] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.209 [2024-07-24 22:14:17.192294] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.209 [2024-07-24 22:14:17.193302] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.209 [2024-07-24 22:14:17.194308] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.209 [2024-07-24 22:14:17.195310] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.209 [2024-07-24 22:14:17.196316] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.209 [2024-07-24 22:14:17.197329] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.209 [2024-07-24 22:14:17.198342] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.209 [2024-07-24 22:14:17.199348] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.209 [2024-07-24 22:14:17.199357] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f27ff973000 00:15:22.209 [2024-07-24 22:14:17.200411] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.209 [2024-07-24 22:14:17.213697] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:22.209 [2024-07-24 22:14:17.213720] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:22.209 [2024-07-24 22:14:17.218794] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:22.209 [2024-07-24 22:14:17.218830] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:22.209 [2024-07-24 22:14:17.218899] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:15:22.209 [2024-07-24 22:14:17.218915] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:22.209 [2024-07-24 22:14:17.218920] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:22.209 [2024-07-24 22:14:17.219800] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:22.209 [2024-07-24 22:14:17.219809] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:22.209 [2024-07-24 22:14:17.219815] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:22.209 [2024-07-24 22:14:17.220807] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:22.209 [2024-07-24 22:14:17.220816] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:22.209 [2024-07-24 22:14:17.220822] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:22.209 [2024-07-24 22:14:17.221813] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:22.209 [2024-07-24 22:14:17.221821] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:22.209 [2024-07-24 22:14:17.222828] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:22.209 [2024-07-24 22:14:17.222836] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:22.209 [2024-07-24 22:14:17.222841] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:22.209 [2024-07-24 22:14:17.222846] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:22.209 [2024-07-24 22:14:17.222951] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:22.209 [2024-07-24 22:14:17.222956] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:22.209 [2024-07-24 22:14:17.222960] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:22.209 [2024-07-24 22:14:17.223835] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:22.209 [2024-07-24 22:14:17.224845] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:22.209 [2024-07-24 22:14:17.225849] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:22.209 [2024-07-24 22:14:17.226865] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:22.209 [2024-07-24 22:14:17.227857] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:22.209 [2024-07-24 22:14:17.227865] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:22.209 [2024-07-24 22:14:17.227869] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:22.209 [2024-07-24 22:14:17.227886] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:22.209 [2024-07-24 22:14:17.227892] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:22.209 [2024-07-24 22:14:17.227902] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.209 [2024-07-24 22:14:17.227906] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.210 [2024-07-24 22:14:17.227917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.210 [2024-07-24 22:14:17.235051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:22.210 [2024-07-24 22:14:17.235062] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:22.210 [2024-07-24 22:14:17.235067] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:22.210 [2024-07-24 22:14:17.235070] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:22.210 [2024-07-24 22:14:17.235075] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:22.210 [2024-07-24 22:14:17.235079] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:22.210 [2024-07-24 22:14:17.235083] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:22.210 [2024-07-24 22:14:17.235087] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.235096] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.235105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:22.210 [2024-07-24 22:14:17.243048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:22.210 [2024-07-24 
22:14:17.243062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.210 [2024-07-24 22:14:17.243069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.210 [2024-07-24 22:14:17.243077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.210 [2024-07-24 22:14:17.243084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.210 [2024-07-24 22:14:17.243088] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.243095] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.243103] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:22.210 [2024-07-24 22:14:17.251050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:22.210 [2024-07-24 22:14:17.251057] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:22.210 [2024-07-24 22:14:17.251061] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.251067] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.251074] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.251082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.210 [2024-07-24 22:14:17.259048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:22.210 [2024-07-24 22:14:17.259103] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.259110] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.259117] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:22.210 [2024-07-24 22:14:17.259121] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:22.210 [2024-07-24 22:14:17.259127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:22.210 [2024-07-24 22:14:17.267049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 
00:15:22.210 [2024-07-24 22:14:17.267062] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:22.210 [2024-07-24 22:14:17.267072] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.267079] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.267084] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.210 [2024-07-24 22:14:17.267089] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.210 [2024-07-24 22:14:17.267095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.210 [2024-07-24 22:14:17.275048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:22.210 [2024-07-24 22:14:17.275061] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.275068] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.275074] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.210 [2024-07-24 22:14:17.275078] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.210 [2024-07-24 22:14:17.275083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.210 [2024-07-24 22:14:17.283051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:22.210 [2024-07-24 22:14:17.283060] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.283066] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.283073] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.283079] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.283083] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:22.210 [2024-07-24 22:14:17.283087] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:22.210 [2024-07-24 22:14:17.283091] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:22.210 [2024-07-24 
22:14:17.283098] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:22.210 [2024-07-24 22:14:17.283113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:22.210 [2024-07-24 22:14:17.291049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:22.210 [2024-07-24 22:14:17.291062] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:22.210 [2024-07-24 22:14:17.299050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:22.210 [2024-07-24 22:14:17.299061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:22.210 [2024-07-24 22:14:17.307050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:22.210 [2024-07-24 22:14:17.307068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.210 [2024-07-24 22:14:17.315047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:22.210 [2024-07-24 22:14:17.315059] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:22.210 [2024-07-24 22:14:17.315063] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:22.210 [2024-07-24 22:14:17.315066] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:22.210 [2024-07-24 22:14:17.315069] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:22.210 [2024-07-24 22:14:17.315075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:22.210 [2024-07-24 22:14:17.315081] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:22.210 [2024-07-24 22:14:17.315085] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:22.210 [2024-07-24 22:14:17.315090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:22.210 [2024-07-24 22:14:17.315096] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:22.210 [2024-07-24 22:14:17.315100] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.210 [2024-07-24 22:14:17.315105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.210 [2024-07-24 22:14:17.315112] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:22.211 [2024-07-24 22:14:17.315115] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:22.211 [2024-07-24 22:14:17.315121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:22.211 [2024-07-24 22:14:17.323048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:22.211 [2024-07-24 22:14:17.323064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:22.211 [2024-07-24 22:14:17.323072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:22.211 [2024-07-24 22:14:17.323078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:22.211 ===================================================== 00:15:22.211 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:22.211 ===================================================== 00:15:22.211 Controller Capabilities/Features 00:15:22.211 ================================ 00:15:22.211 Vendor ID: 4e58 00:15:22.211 Subsystem Vendor ID: 4e58 00:15:22.211 Serial Number: SPDK2 00:15:22.211 Model Number: SPDK bdev Controller 00:15:22.211 Firmware Version: 24.01.1 00:15:22.211 Recommended Arb Burst: 6 00:15:22.211 IEEE OUI Identifier: 8d 6b 50 00:15:22.211 Multi-path I/O 00:15:22.211 May have multiple subsystem ports: Yes 00:15:22.211 May have multiple controllers: Yes 00:15:22.211 Associated with SR-IOV VF: No 00:15:22.211 Max Data Transfer Size: 131072 00:15:22.211 Max Number of Namespaces: 32 00:15:22.211 Max Number of I/O Queues: 127 00:15:22.211 NVMe Specification Version (VS): 1.3 00:15:22.211 NVMe Specification Version (Identify): 1.3 00:15:22.211 Maximum Queue Entries: 256 00:15:22.211 Contiguous Queues Required: Yes 00:15:22.211 Arbitration Mechanisms Supported 00:15:22.211 Weighted Round Robin: Not Supported 00:15:22.211 Vendor Specific: Not Supported 00:15:22.211 Reset Timeout: 15000 ms 00:15:22.211 Doorbell Stride: 4 bytes 00:15:22.211 NVM Subsystem Reset: Not Supported 00:15:22.211 Command Sets Supported 00:15:22.211 NVM Command Set: Supported 00:15:22.211 Boot Partition: Not Supported 00:15:22.211 Memory Page Size Minimum: 4096 bytes 00:15:22.211 Memory Page Size Maximum: 4096 bytes 00:15:22.211 Persistent Memory Region: Not Supported 00:15:22.211 Optional Asynchronous Events Supported 00:15:22.211 Namespace Attribute Notices: Supported 00:15:22.211 Firmware Activation Notices: Not Supported 00:15:22.211 ANA Change Notices: Not Supported 00:15:22.211 PLE Aggregate Log Change Notices: Not Supported 00:15:22.211 LBA Status Info Alert Notices: Not Supported 00:15:22.211 EGE Aggregate Log Change Notices: Not Supported 00:15:22.211 Normal NVM Subsystem Shutdown event: Not Supported 00:15:22.211 Zone Descriptor Change Notices: Not Supported 00:15:22.211 Discovery Log Change Notices: Not Supported 00:15:22.211 Controller Attributes 00:15:22.211 128-bit Host Identifier: Supported 00:15:22.211 Non-Operational Permissive Mode: Not Supported 00:15:22.211 NVM Sets: Not Supported 00:15:22.211 Read Recovery Levels: Not Supported 00:15:22.211 Endurance Groups: Not Supported 00:15:22.211 Predictable Latency Mode: Not Supported 00:15:22.211 Traffic Based Keep ALive: Not Supported 00:15:22.211 Namespace Granularity: Not Supported 00:15:22.211 SQ Associations: Not Supported 00:15:22.211 UUID List: Not Supported 00:15:22.211 Multi-Domain Subsystem: Not Supported 00:15:22.211 Fixed Capacity Management: Not Supported 
00:15:22.211 Variable Capacity Management: Not Supported 00:15:22.211 Delete Endurance Group: Not Supported 00:15:22.211 Delete NVM Set: Not Supported 00:15:22.211 Extended LBA Formats Supported: Not Supported 00:15:22.211 Flexible Data Placement Supported: Not Supported 00:15:22.211 00:15:22.211 Controller Memory Buffer Support 00:15:22.211 ================================ 00:15:22.211 Supported: No 00:15:22.211 00:15:22.211 Persistent Memory Region Support 00:15:22.211 ================================ 00:15:22.211 Supported: No 00:15:22.211 00:15:22.211 Admin Command Set Attributes 00:15:22.211 ============================ 00:15:22.211 Security Send/Receive: Not Supported 00:15:22.211 Format NVM: Not Supported 00:15:22.211 Firmware Activate/Download: Not Supported 00:15:22.211 Namespace Management: Not Supported 00:15:22.211 Device Self-Test: Not Supported 00:15:22.211 Directives: Not Supported 00:15:22.211 NVMe-MI: Not Supported 00:15:22.211 Virtualization Management: Not Supported 00:15:22.211 Doorbell Buffer Config: Not Supported 00:15:22.211 Get LBA Status Capability: Not Supported 00:15:22.211 Command & Feature Lockdown Capability: Not Supported 00:15:22.211 Abort Command Limit: 4 00:15:22.211 Async Event Request Limit: 4 00:15:22.211 Number of Firmware Slots: N/A 00:15:22.211 Firmware Slot 1 Read-Only: N/A 00:15:22.211 Firmware Activation Without Reset: N/A 00:15:22.211 Multiple Update Detection Support: N/A 00:15:22.211 Firmware Update Granularity: No Information Provided 00:15:22.211 Per-Namespace SMART Log: No 00:15:22.211 Asymmetric Namespace Access Log Page: Not Supported 00:15:22.211 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:22.211 Command Effects Log Page: Supported 00:15:22.211 Get Log Page Extended Data: Supported 00:15:22.211 Telemetry Log Pages: Not Supported 00:15:22.211 Persistent Event Log Pages: Not Supported 00:15:22.211 Supported Log Pages Log Page: May Support 00:15:22.211 Commands Supported & Effects Log Page: Not Supported 00:15:22.211 Feature Identifiers & Effects Log Page:May Support 00:15:22.211 NVMe-MI Commands & Effects Log Page: May Support 00:15:22.211 Data Area 4 for Telemetry Log: Not Supported 00:15:22.211 Error Log Page Entries Supported: 128 00:15:22.211 Keep Alive: Supported 00:15:22.211 Keep Alive Granularity: 10000 ms 00:15:22.211 00:15:22.211 NVM Command Set Attributes 00:15:22.211 ========================== 00:15:22.211 Submission Queue Entry Size 00:15:22.211 Max: 64 00:15:22.211 Min: 64 00:15:22.211 Completion Queue Entry Size 00:15:22.211 Max: 16 00:15:22.211 Min: 16 00:15:22.211 Number of Namespaces: 32 00:15:22.211 Compare Command: Supported 00:15:22.211 Write Uncorrectable Command: Not Supported 00:15:22.211 Dataset Management Command: Supported 00:15:22.211 Write Zeroes Command: Supported 00:15:22.211 Set Features Save Field: Not Supported 00:15:22.211 Reservations: Not Supported 00:15:22.211 Timestamp: Not Supported 00:15:22.211 Copy: Supported 00:15:22.211 Volatile Write Cache: Present 00:15:22.211 Atomic Write Unit (Normal): 1 00:15:22.211 Atomic Write Unit (PFail): 1 00:15:22.211 Atomic Compare & Write Unit: 1 00:15:22.211 Fused Compare & Write: Supported 00:15:22.211 Scatter-Gather List 00:15:22.211 SGL Command Set: Supported (Dword aligned) 00:15:22.212 SGL Keyed: Not Supported 00:15:22.212 SGL Bit Bucket Descriptor: Not Supported 00:15:22.212 SGL Metadata Pointer: Not Supported 00:15:22.212 Oversized SGL: Not Supported 00:15:22.212 SGL Metadata Address: Not Supported 00:15:22.212 SGL Offset: Not Supported 00:15:22.212 
Transport SGL Data Block: Not Supported 00:15:22.212 Replay Protected Memory Block: Not Supported 00:15:22.212 00:15:22.212 Firmware Slot Information 00:15:22.212 ========================= 00:15:22.212 Active slot: 1 00:15:22.212 Slot 1 Firmware Revision: 24.01.1 00:15:22.212 00:15:22.212 00:15:22.212 Commands Supported and Effects 00:15:22.212 ============================== 00:15:22.212 Admin Commands 00:15:22.212 -------------- 00:15:22.212 Get Log Page (02h): Supported 00:15:22.212 Identify (06h): Supported 00:15:22.212 Abort (08h): Supported 00:15:22.212 Set Features (09h): Supported 00:15:22.212 Get Features (0Ah): Supported 00:15:22.212 Asynchronous Event Request (0Ch): Supported 00:15:22.212 Keep Alive (18h): Supported 00:15:22.212 I/O Commands 00:15:22.212 ------------ 00:15:22.212 Flush (00h): Supported LBA-Change 00:15:22.212 Write (01h): Supported LBA-Change 00:15:22.212 Read (02h): Supported 00:15:22.212 Compare (05h): Supported 00:15:22.212 Write Zeroes (08h): Supported LBA-Change 00:15:22.212 Dataset Management (09h): Supported LBA-Change 00:15:22.212 Copy (19h): Supported LBA-Change 00:15:22.212 Unknown (79h): Supported LBA-Change 00:15:22.212 Unknown (7Ah): Supported 00:15:22.212 00:15:22.212 Error Log 00:15:22.212 ========= 00:15:22.212 00:15:22.212 Arbitration 00:15:22.212 =========== 00:15:22.212 Arbitration Burst: 1 00:15:22.212 00:15:22.212 Power Management 00:15:22.212 ================ 00:15:22.212 Number of Power States: 1 00:15:22.212 Current Power State: Power State #0 00:15:22.212 Power State #0: 00:15:22.212 Max Power: 0.00 W 00:15:22.212 Non-Operational State: Operational 00:15:22.212 Entry Latency: Not Reported 00:15:22.212 Exit Latency: Not Reported 00:15:22.212 Relative Read Throughput: 0 00:15:22.212 Relative Read Latency: 0 00:15:22.212 Relative Write Throughput: 0 00:15:22.212 Relative Write Latency: 0 00:15:22.212 Idle Power: Not Reported 00:15:22.212 Active Power: Not Reported 00:15:22.212 Non-Operational Permissive Mode: Not Supported 00:15:22.212 00:15:22.212 Health Information 00:15:22.212 ================== 00:15:22.212 Critical Warnings: 00:15:22.212 Available Spare Space: OK 00:15:22.212 Temperature: OK 00:15:22.212 Device Reliability: OK 00:15:22.212 Read Only: No 00:15:22.212 Volatile Memory Backup: OK 00:15:22.212 Current Temperature: 0 Kelvin[2024-07-24 22:14:17.323170] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:22.212 [2024-07-24 22:14:17.331049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:22.212 [2024-07-24 22:14:17.331074] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:22.212 [2024-07-24 22:14:17.331083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.212 [2024-07-24 22:14:17.331088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.212 [2024-07-24 22:14:17.331094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.212 [2024-07-24 22:14:17.331099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.212 [2024-07-24 22:14:17.331137] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:22.212 [2024-07-24 22:14:17.331146] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:22.212 [2024-07-24 22:14:17.332169] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:22.212 [2024-07-24 22:14:17.332176] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:22.212 [2024-07-24 22:14:17.333152] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:22.212 [2024-07-24 22:14:17.333162] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:22.212 [2024-07-24 22:14:17.333207] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:22.212 [2024-07-24 22:14:17.334301] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.472 (-273 Celsius) 00:15:22.472 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:22.472 Available Spare: 0% 00:15:22.472 Available Spare Threshold: 0% 00:15:22.472 Life Percentage Used: 0% 00:15:22.472 Data Units Read: 0 00:15:22.472 Data Units Written: 0 00:15:22.472 Host Read Commands: 0 00:15:22.472 Host Write Commands: 0 00:15:22.472 Controller Busy Time: 0 minutes 00:15:22.472 Power Cycles: 0 00:15:22.472 Power On Hours: 0 hours 00:15:22.472 Unsafe Shutdowns: 0 00:15:22.472 Unrecoverable Media Errors: 0 00:15:22.472 Lifetime Error Log Entries: 0 00:15:22.472 Warning Temperature Time: 0 minutes 00:15:22.472 Critical Temperature Time: 0 minutes 00:15:22.472 00:15:22.472 Number of Queues 00:15:22.472 ================ 00:15:22.472 Number of I/O Submission Queues: 127 00:15:22.472 Number of I/O Completion Queues: 127 00:15:22.472 00:15:22.472 Active Namespaces 00:15:22.472 ================= 00:15:22.472 Namespace ID:1 00:15:22.472 Error Recovery Timeout: Unlimited 00:15:22.472 Command Set Identifier: NVM (00h) 00:15:22.472 Deallocate: Supported 00:15:22.472 Deallocated/Unwritten Error: Not Supported 00:15:22.472 Deallocated Read Value: Unknown 00:15:22.472 Deallocate in Write Zeroes: Not Supported 00:15:22.472 Deallocated Guard Field: 0xFFFF 00:15:22.472 Flush: Supported 00:15:22.472 Reservation: Supported 00:15:22.472 Namespace Sharing Capabilities: Multiple Controllers 00:15:22.472 Size (in LBAs): 131072 (0GiB) 00:15:22.472 Capacity (in LBAs): 131072 (0GiB) 00:15:22.472 Utilization (in LBAs): 131072 (0GiB) 00:15:22.472 NGUID: 5B7C8FD0564A4F96AC3FEEEEE9C29317 00:15:22.472 UUID: 5b7c8fd0-564a-4f96-ac3f-eeeee9c29317 00:15:22.472 Thin Provisioning: Not Supported 00:15:22.472 Per-NS Atomic Units: Yes 00:15:22.472 Atomic Boundary Size (Normal): 0 00:15:22.472 Atomic Boundary Size (PFail): 0 00:15:22.472 Atomic Boundary Offset: 0 00:15:22.472 Maximum Single Source Range Length: 65535 00:15:22.472 Maximum Copy Length: 65535 00:15:22.472 Maximum Source Range Count: 1 00:15:22.472 NGUID/EUI64 Never Reused: No 00:15:22.472 Namespace Write Protected: No 00:15:22.472 Number of LBA Formats: 1 00:15:22.472 Current LBA Format: LBA Format #00 00:15:22.472 LBA Format #00: Data Size: 512 Metadata Size: 0 
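The controller report above (capabilities, admin/NVM command set attributes, log pages, power and health information, and the single active namespace backed by Malloc2) comes from the spdk_nvme_identify run launched a few lines earlier; the interleaved *DEBUG* lines are produced by its -L logging flags. A minimal sketch of that step, under the same assumption about the workspace path:

# Identify the second vfio-user controller (command as issued by the test above).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_nvme_identify" \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
  -g -L nvme -L nvme_vfio -L vfio_pci
# Omitting the -L flags should give the same controller/namespace report without the debug interleaving.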
00:15:22.472 00:15:22.472 22:14:17 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:22.472 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.753 Initializing NVMe Controllers 00:15:27.753 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:27.753 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:27.753 Initialization complete. Launching workers. 00:15:27.753 ======================================================== 00:15:27.753 Latency(us) 00:15:27.753 Device Information : IOPS MiB/s Average min max 00:15:27.753 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39918.88 155.93 3206.11 964.48 6767.85 00:15:27.753 ======================================================== 00:15:27.753 Total : 39918.88 155.93 3206.11 964.48 6767.85 00:15:27.753 00:15:27.753 22:14:22 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:27.753 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.036 Initializing NVMe Controllers 00:15:33.036 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:33.036 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:33.036 Initialization complete. Launching workers. 00:15:33.036 ======================================================== 00:15:33.036 Latency(us) 00:15:33.036 Device Information : IOPS MiB/s Average min max 00:15:33.036 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39933.53 155.99 3205.15 952.56 6672.71 00:15:33.036 ======================================================== 00:15:33.036 Total : 39933.53 155.99 3205.15 952.56 6672.71 00:15:33.036 00:15:33.036 22:14:27 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:33.036 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.318 Initializing NVMe Controllers 00:15:38.318 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:38.318 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:38.318 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:38.318 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:38.318 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:38.318 Initialization complete. Launching workers. 
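The two latency tables above are the read and write passes of spdk_nvme_perf against the same controller; the reconnect example started at the end of this block continues directly below. A sketch of the two perf invocations, using the parameters from this run (the loop is illustrative; the test script issues the two commands separately, and SPDK_DIR/TRID are illustrative names):

# Read and write perf passes: 4 KiB I/O, queue depth 128, 5 seconds, single core (mask 0x2).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
for workload in read write; do
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$workload" -t 5 -c 0x2
done

At this queue depth both passes land just under 40k IOPS with an average latency of about 3.2 ms, which matches the summary rows above.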
00:15:38.318 Starting thread on core 2 00:15:38.318 Starting thread on core 3 00:15:38.318 Starting thread on core 1 00:15:38.318 22:14:33 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:38.318 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.616 Initializing NVMe Controllers 00:15:41.616 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.616 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.616 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:41.616 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:41.616 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:41.616 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:41.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:41.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:41.616 Initialization complete. Launching workers. 00:15:41.616 Starting thread on core 1 with urgent priority queue 00:15:41.616 Starting thread on core 2 with urgent priority queue 00:15:41.616 Starting thread on core 3 with urgent priority queue 00:15:41.616 Starting thread on core 0 with urgent priority queue 00:15:41.616 SPDK bdev Controller (SPDK2 ) core 0: 7762.67 IO/s 12.88 secs/100000 ios 00:15:41.616 SPDK bdev Controller (SPDK2 ) core 1: 8520.00 IO/s 11.74 secs/100000 ios 00:15:41.616 SPDK bdev Controller (SPDK2 ) core 2: 8063.00 IO/s 12.40 secs/100000 ios 00:15:41.616 SPDK bdev Controller (SPDK2 ) core 3: 9173.00 IO/s 10.90 secs/100000 ios 00:15:41.616 ======================================================== 00:15:41.616 00:15:41.616 22:14:36 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:41.616 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.875 Initializing NVMe Controllers 00:15:41.875 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.875 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.875 Namespace ID: 1 size: 0GB 00:15:41.875 Initialization complete. 00:15:41.875 INFO: using host memory buffer for IO 00:15:41.875 Hello world! 00:15:41.875 22:14:36 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:41.875 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.254 Initializing NVMe Controllers 00:15:43.254 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.254 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.254 Initialization complete. Launching workers. 
00:15:43.254 submit (in ns) avg, min, max = 8306.4, 3179.1, 4000964.3 00:15:43.254 complete (in ns) avg, min, max = 18787.8, 1742.6, 6988921.7 00:15:43.254 00:15:43.254 Submit histogram 00:15:43.254 ================ 00:15:43.254 Range in us Cumulative Count 00:15:43.254 3.172 - 3.186: 0.0061% ( 1) 00:15:43.254 3.186 - 3.200: 0.0182% ( 2) 00:15:43.254 3.200 - 3.214: 0.0424% ( 4) 00:15:43.254 3.214 - 3.228: 0.0606% ( 3) 00:15:43.254 3.228 - 3.242: 0.1455% ( 14) 00:15:43.254 3.242 - 3.256: 0.3213% ( 29) 00:15:43.254 3.256 - 3.270: 0.8183% ( 82) 00:15:43.254 3.270 - 3.283: 1.9216% ( 182) 00:15:43.255 3.283 - 3.297: 3.4976% ( 260) 00:15:43.255 3.297 - 3.311: 6.2617% ( 456) 00:15:43.255 3.311 - 3.325: 10.5049% ( 700) 00:15:43.255 3.325 - 3.339: 15.3422% ( 798) 00:15:43.255 3.339 - 3.353: 20.7977% ( 900) 00:15:43.255 3.353 - 3.367: 26.8534% ( 999) 00:15:43.255 3.367 - 3.381: 32.1574% ( 875) 00:15:43.255 3.381 - 3.395: 36.9885% ( 797) 00:15:43.255 3.395 - 3.409: 41.9531% ( 819) 00:15:43.255 3.409 - 3.423: 47.0267% ( 837) 00:15:43.255 3.423 - 3.437: 51.5670% ( 749) 00:15:43.255 3.437 - 3.450: 56.1799% ( 761) 00:15:43.255 3.450 - 3.464: 62.4356% ( 1032) 00:15:43.255 3.464 - 3.478: 67.4365% ( 825) 00:15:43.255 3.478 - 3.492: 71.3827% ( 651) 00:15:43.255 3.492 - 3.506: 76.2805% ( 808) 00:15:43.255 3.506 - 3.520: 79.9176% ( 600) 00:15:43.255 3.520 - 3.534: 82.6938% ( 458) 00:15:43.255 3.534 - 3.548: 84.7669% ( 342) 00:15:43.255 3.548 - 3.562: 85.9853% ( 201) 00:15:43.255 3.562 - 3.590: 87.5432% ( 257) 00:15:43.255 3.590 - 3.617: 88.8343% ( 213) 00:15:43.255 3.617 - 3.645: 90.5619% ( 285) 00:15:43.255 3.645 - 3.673: 92.1622% ( 264) 00:15:43.255 3.673 - 3.701: 93.9080% ( 288) 00:15:43.255 3.701 - 3.729: 95.6356% ( 285) 00:15:43.255 3.729 - 3.757: 97.1025% ( 242) 00:15:43.255 3.757 - 3.784: 98.1633% ( 175) 00:15:43.255 3.784 - 3.812: 98.7270% ( 93) 00:15:43.255 3.812 - 3.840: 99.0847% ( 59) 00:15:43.255 3.840 - 3.868: 99.3393% ( 42) 00:15:43.255 3.868 - 3.896: 99.5211% ( 30) 00:15:43.255 3.896 - 3.923: 99.5575% ( 6) 00:15:43.255 3.923 - 3.951: 99.5817% ( 4) 00:15:43.255 3.951 - 3.979: 99.5878% ( 1) 00:15:43.255 4.035 - 4.063: 99.5939% ( 1) 00:15:43.255 5.315 - 5.343: 99.5999% ( 1) 00:15:43.255 5.343 - 5.370: 99.6060% ( 1) 00:15:43.255 5.482 - 5.510: 99.6121% ( 1) 00:15:43.255 5.510 - 5.537: 99.6181% ( 1) 00:15:43.255 5.593 - 5.621: 99.6242% ( 1) 00:15:43.255 5.704 - 5.732: 99.6302% ( 1) 00:15:43.255 5.732 - 5.760: 99.6363% ( 1) 00:15:43.255 5.816 - 5.843: 99.6484% ( 2) 00:15:43.255 5.899 - 5.927: 99.6605% ( 2) 00:15:43.255 5.927 - 5.955: 99.6666% ( 1) 00:15:43.255 6.010 - 6.038: 99.6727% ( 1) 00:15:43.255 6.066 - 6.094: 99.6787% ( 1) 00:15:43.255 6.400 - 6.428: 99.6848% ( 1) 00:15:43.255 6.456 - 6.483: 99.6909% ( 1) 00:15:43.255 6.539 - 6.567: 99.6969% ( 1) 00:15:43.255 6.567 - 6.595: 99.7030% ( 1) 00:15:43.255 6.706 - 6.734: 99.7090% ( 1) 00:15:43.255 6.762 - 6.790: 99.7151% ( 1) 00:15:43.255 6.901 - 6.929: 99.7272% ( 2) 00:15:43.255 7.179 - 7.235: 99.7333% ( 1) 00:15:43.255 7.290 - 7.346: 99.7393% ( 1) 00:15:43.255 7.346 - 7.402: 99.7454% ( 1) 00:15:43.255 7.457 - 7.513: 99.7515% ( 1) 00:15:43.255 7.513 - 7.569: 99.7575% ( 1) 00:15:43.255 7.736 - 7.791: 99.7636% ( 1) 00:15:43.255 7.903 - 7.958: 99.7697% ( 1) 00:15:43.255 8.014 - 8.070: 99.7757% ( 1) 00:15:43.255 8.237 - 8.292: 99.7818% ( 1) 00:15:43.255 8.348 - 8.403: 99.7939% ( 2) 00:15:43.255 8.570 - 8.626: 99.8000% ( 1) 00:15:43.255 8.793 - 8.849: 99.8060% ( 1) 00:15:43.255 8.849 - 8.904: 99.8181% ( 2) 00:15:43.255 8.904 - 8.960: 99.8242% ( 1) 
00:15:43.255 9.294 - 9.350: 99.8303% ( 1) 00:15:43.255 9.350 - 9.405: 99.8363% ( 1) 00:15:43.255 9.850 - 9.906: 99.8424% ( 1) 00:15:43.255 10.073 - 10.129: 99.8485% ( 1) 00:15:43.255 11.631 - 11.687: 99.8545% ( 1) 00:15:43.255 13.746 - 13.802: 99.8606% ( 1) 00:15:43.255 15.471 - 15.583: 99.8666% ( 1) 00:15:43.255 19.256 - 19.367: 99.8788% ( 2) 00:15:43.255 3989.148 - 4017.642: 100.0000% ( 20) 00:15:43.255 00:15:43.255 Complete histogram 00:15:43.255 ================== 00:15:43.255 Range in us Cumulative Count 00:15:43.255 1.739 - 1.746: 0.0485% ( 8) 00:15:43.255 1.746 - 1.753: 0.2182% ( 28) 00:15:43.255 1.753 - 1.760: 0.5819% ( 60) 00:15:43.255 1.760 - 1.767: 0.9456% ( 60) 00:15:43.255 1.767 - 1.774: 1.2427% ( 49) 00:15:43.255 1.774 - 1.781: 1.3699% ( 21) 00:15:43.255 1.781 - 1.795: 2.0004% ( 104) 00:15:43.255 1.795 - 1.809: 17.1061% ( 2492) 00:15:43.255 1.809 - 1.823: 58.0045% ( 6747) 00:15:43.255 1.823 - 1.837: 83.2697% ( 4168) 00:15:43.255 1.837 - 1.850: 89.2344% ( 984) 00:15:43.255 1.850 - 1.864: 94.3323% ( 841) 00:15:43.255 1.864 - 1.878: 97.5693% ( 534) 00:15:43.255 1.878 - 1.892: 98.5816% ( 167) 00:15:43.255 1.892 - 1.906: 99.0362% ( 75) 00:15:43.255 1.906 - 1.920: 99.1635% ( 21) 00:15:43.255 1.920 - 1.934: 99.1938% ( 5) 00:15:43.255 1.934 - 1.948: 99.1999% ( 1) 00:15:43.255 1.948 - 1.962: 99.2120% ( 2) 00:15:43.255 1.976 - 1.990: 99.2302% ( 3) 00:15:43.255 1.990 - 2.003: 99.2544% ( 4) 00:15:43.255 2.003 - 2.017: 99.2787% ( 4) 00:15:43.255 2.031 - 2.045: 99.2968% ( 3) 00:15:43.255 2.045 - 2.059: 99.3090% ( 2) 00:15:43.255 2.059 - 2.073: 99.3211% ( 2) 00:15:43.255 2.101 - 2.115: 99.3332% ( 2) 00:15:43.255 2.212 - 2.226: 99.3453% ( 2) 00:15:43.255 2.254 - 2.268: 99.3514% ( 1) 00:15:43.255 2.268 - 2.282: 99.3575% ( 1) 00:15:43.255 3.673 - 3.701: 99.3635% ( 1) 00:15:43.255 3.757 - 3.784: 99.3696% ( 1) 00:15:43.255 4.035 - 4.063: 99.3756% ( 1) 00:15:43.255 4.090 - 4.118: 99.3817% ( 1) 00:15:43.255 4.118 - 4.146: 99.3878% ( 1) 00:15:43.255 4.174 - 4.202: 99.3938% ( 1) 00:15:43.255 4.202 - 4.230: 99.3999% ( 1) 00:15:43.255 4.703 - 4.730: 99.4060% ( 1) 00:15:43.255 4.758 - 4.786: 99.4120% ( 1) 00:15:43.255 4.897 - 4.925: 99.4181% ( 1) 00:15:43.255 4.925 - 4.953: 99.4241% ( 1) 00:15:43.255 5.009 - 5.037: 99.4302% ( 1) 00:15:43.255 5.092 - 5.120: 99.4363% ( 1) 00:15:43.255 5.120 - 5.148: 99.4423% ( 1) 00:15:43.255 5.203 - 5.231: 99.4484% ( 1) 00:15:43.255 5.259 - 5.287: 99.4544% ( 1) 00:15:43.255 5.398 - 5.426: 99.4666% ( 2) 00:15:43.255 5.593 - 5.621: 99.4726% ( 1) 00:15:43.255 5.649 - 5.677: 99.4787% ( 1) 00:15:43.255 5.677 - 5.704: 99.4848% ( 1) 00:15:43.255 5.927 - 5.955: 99.4908% ( 1) 00:15:43.255 5.955 - 5.983: 99.4969% ( 1) 00:15:43.255 6.066 - 6.094: 99.5029% ( 1) 00:15:43.255 6.122 - 6.150: 99.5090% ( 1) 00:15:43.255 6.205 - 6.233: 99.5151% ( 1) 00:15:43.255 6.595 - 6.623: 99.5211% ( 1) 00:15:43.255 6.873 - 6.901: 99.5272% ( 1) 00:15:43.255 7.040 - 7.068: 99.5332% ( 1) 00:15:43.255 7.068 - 7.096: 99.5393% ( 1) 00:15:43.255 7.123 - 7.179: 99.5454% ( 1) 00:15:43.255 8.181 - 8.237: 99.5514% ( 1) 00:15:43.255 8.793 - 8.849: 99.5575% ( 1) 00:15:43.255 11.965 - 12.021: 99.5636% ( 1) 00:15:43.255 12.077 - 12.132: 99.5696% ( 1) 00:15:43.255 15.694 - 15.805: 99.5757% ( 1) 00:15:43.255 1011.534 - 1018.657: 99.5817% ( 1) 00:15:43.255 1025.781 - 1032.904: 99.5878% ( 1) 00:15:43.255 1032.904 - 1040.028: 99.5939% ( 1) 00:15:43.255 3989.148 - 4017.642: 99.9818% ( 64) 00:15:43.255 6981.009 - 7009.503: 100.0000% ( 3) 00:15:43.255 00:15:43.255 22:14:38 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:43.255 22:14:38 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:43.255 22:14:38 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:43.255 22:14:38 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:43.255 22:14:38 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.516 [ 00:15:43.516 { 00:15:43.516 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.516 "subtype": "Discovery", 00:15:43.516 "listen_addresses": [], 00:15:43.516 "allow_any_host": true, 00:15:43.516 "hosts": [] 00:15:43.516 }, 00:15:43.516 { 00:15:43.516 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.516 "subtype": "NVMe", 00:15:43.516 "listen_addresses": [ 00:15:43.516 { 00:15:43.516 "transport": "VFIOUSER", 00:15:43.516 "trtype": "VFIOUSER", 00:15:43.516 "adrfam": "IPv4", 00:15:43.516 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.516 "trsvcid": "0" 00:15:43.516 } 00:15:43.516 ], 00:15:43.516 "allow_any_host": true, 00:15:43.516 "hosts": [], 00:15:43.516 "serial_number": "SPDK1", 00:15:43.516 "model_number": "SPDK bdev Controller", 00:15:43.516 "max_namespaces": 32, 00:15:43.516 "min_cntlid": 1, 00:15:43.516 "max_cntlid": 65519, 00:15:43.516 "namespaces": [ 00:15:43.516 { 00:15:43.516 "nsid": 1, 00:15:43.516 "bdev_name": "Malloc1", 00:15:43.516 "name": "Malloc1", 00:15:43.516 "nguid": "FCBC49ED32684D4AB74CE8B071FB53C8", 00:15:43.516 "uuid": "fcbc49ed-3268-4d4a-b74c-e8b071fb53c8" 00:15:43.516 }, 00:15:43.516 { 00:15:43.516 "nsid": 2, 00:15:43.516 "bdev_name": "Malloc3", 00:15:43.516 "name": "Malloc3", 00:15:43.516 "nguid": "9937446DCEE04ADBBAE2D8238AF2ED19", 00:15:43.516 "uuid": "9937446d-cee0-4adb-bae2-d8238af2ed19" 00:15:43.516 } 00:15:43.516 ] 00:15:43.516 }, 00:15:43.516 { 00:15:43.516 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.516 "subtype": "NVMe", 00:15:43.516 "listen_addresses": [ 00:15:43.516 { 00:15:43.516 "transport": "VFIOUSER", 00:15:43.516 "trtype": "VFIOUSER", 00:15:43.516 "adrfam": "IPv4", 00:15:43.516 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.516 "trsvcid": "0" 00:15:43.516 } 00:15:43.516 ], 00:15:43.516 "allow_any_host": true, 00:15:43.516 "hosts": [], 00:15:43.516 "serial_number": "SPDK2", 00:15:43.516 "model_number": "SPDK bdev Controller", 00:15:43.516 "max_namespaces": 32, 00:15:43.516 "min_cntlid": 1, 00:15:43.516 "max_cntlid": 65519, 00:15:43.516 "namespaces": [ 00:15:43.516 { 00:15:43.516 "nsid": 1, 00:15:43.516 "bdev_name": "Malloc2", 00:15:43.516 "name": "Malloc2", 00:15:43.516 "nguid": "5B7C8FD0564A4F96AC3FEEEEE9C29317", 00:15:43.516 "uuid": "5b7c8fd0-564a-4f96-ac3f-eeeee9c29317" 00:15:43.516 } 00:15:43.516 ] 00:15:43.516 } 00:15:43.516 ] 00:15:43.516 22:14:38 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:43.516 22:14:38 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:43.516 22:14:38 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3517899 00:15:43.516 22:14:38 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:43.516 22:14:38 -- common/autotest_common.sh@1244 -- # local i=0 00:15:43.516 22:14:38 -- common/autotest_common.sh@1245 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:43.516 22:14:38 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:43.516 22:14:38 -- common/autotest_common.sh@1255 -- # return 0 00:15:43.516 22:14:38 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:43.516 22:14:38 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:43.516 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.516 Malloc4 00:15:43.516 22:14:38 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:43.776 22:14:38 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.776 Asynchronous Event Request test 00:15:43.776 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.776 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.776 Registering asynchronous event callbacks... 00:15:43.776 Starting namespace attribute notice tests for all controllers... 00:15:43.776 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:43.776 aer_cb - Changed Namespace 00:15:43.776 Cleaning up... 00:15:44.036 [ 00:15:44.036 { 00:15:44.036 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:44.036 "subtype": "Discovery", 00:15:44.036 "listen_addresses": [], 00:15:44.036 "allow_any_host": true, 00:15:44.036 "hosts": [] 00:15:44.036 }, 00:15:44.036 { 00:15:44.036 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:44.036 "subtype": "NVMe", 00:15:44.036 "listen_addresses": [ 00:15:44.036 { 00:15:44.036 "transport": "VFIOUSER", 00:15:44.036 "trtype": "VFIOUSER", 00:15:44.036 "adrfam": "IPv4", 00:15:44.036 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:44.036 "trsvcid": "0" 00:15:44.036 } 00:15:44.036 ], 00:15:44.036 "allow_any_host": true, 00:15:44.036 "hosts": [], 00:15:44.036 "serial_number": "SPDK1", 00:15:44.036 "model_number": "SPDK bdev Controller", 00:15:44.036 "max_namespaces": 32, 00:15:44.036 "min_cntlid": 1, 00:15:44.036 "max_cntlid": 65519, 00:15:44.036 "namespaces": [ 00:15:44.036 { 00:15:44.036 "nsid": 1, 00:15:44.036 "bdev_name": "Malloc1", 00:15:44.036 "name": "Malloc1", 00:15:44.036 "nguid": "FCBC49ED32684D4AB74CE8B071FB53C8", 00:15:44.036 "uuid": "fcbc49ed-3268-4d4a-b74c-e8b071fb53c8" 00:15:44.036 }, 00:15:44.036 { 00:15:44.036 "nsid": 2, 00:15:44.036 "bdev_name": "Malloc3", 00:15:44.036 "name": "Malloc3", 00:15:44.036 "nguid": "9937446DCEE04ADBBAE2D8238AF2ED19", 00:15:44.036 "uuid": "9937446d-cee0-4adb-bae2-d8238af2ed19" 00:15:44.036 } 00:15:44.036 ] 00:15:44.036 }, 00:15:44.036 { 00:15:44.036 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:44.036 "subtype": "NVMe", 00:15:44.036 "listen_addresses": [ 00:15:44.036 { 00:15:44.036 "transport": "VFIOUSER", 00:15:44.036 "trtype": "VFIOUSER", 00:15:44.036 "adrfam": "IPv4", 00:15:44.036 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:44.036 "trsvcid": "0" 00:15:44.036 } 00:15:44.036 ], 00:15:44.036 "allow_any_host": true, 00:15:44.036 "hosts": [], 00:15:44.036 "serial_number": "SPDK2", 00:15:44.036 "model_number": "SPDK bdev Controller", 00:15:44.036 "max_namespaces": 32, 00:15:44.036 "min_cntlid": 1, 00:15:44.036 "max_cntlid": 65519, 00:15:44.036 "namespaces": [ 00:15:44.036 { 00:15:44.036 "nsid": 1, 00:15:44.036 "bdev_name": "Malloc2", 00:15:44.036 "name": "Malloc2", 00:15:44.036 "nguid": 
"5B7C8FD0564A4F96AC3FEEEEE9C29317", 00:15:44.036 "uuid": "5b7c8fd0-564a-4f96-ac3f-eeeee9c29317" 00:15:44.036 }, 00:15:44.036 { 00:15:44.036 "nsid": 2, 00:15:44.036 "bdev_name": "Malloc4", 00:15:44.037 "name": "Malloc4", 00:15:44.037 "nguid": "8CCB450C6F6F4212BF51036A23848F36", 00:15:44.037 "uuid": "8ccb450c-6f6f-4212-bf51-036a23848f36" 00:15:44.037 } 00:15:44.037 ] 00:15:44.037 } 00:15:44.037 ] 00:15:44.037 22:14:38 -- target/nvmf_vfio_user.sh@44 -- # wait 3517899 00:15:44.037 22:14:38 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:44.037 22:14:38 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3510143 00:15:44.037 22:14:39 -- common/autotest_common.sh@926 -- # '[' -z 3510143 ']' 00:15:44.037 22:14:39 -- common/autotest_common.sh@930 -- # kill -0 3510143 00:15:44.037 22:14:39 -- common/autotest_common.sh@931 -- # uname 00:15:44.037 22:14:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:44.037 22:14:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3510143 00:15:44.037 22:14:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:44.037 22:14:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:44.037 22:14:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3510143' 00:15:44.037 killing process with pid 3510143 00:15:44.037 22:14:39 -- common/autotest_common.sh@945 -- # kill 3510143 00:15:44.037 [2024-07-24 22:14:39.046393] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:44.037 22:14:39 -- common/autotest_common.sh@950 -- # wait 3510143 00:15:44.297 22:14:39 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:44.297 22:14:39 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:44.297 22:14:39 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:44.297 22:14:39 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:44.297 22:14:39 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:44.297 22:14:39 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3518138 00:15:44.297 22:14:39 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3518138' 00:15:44.297 Process pid: 3518138 00:15:44.297 22:14:39 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:44.297 22:14:39 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:44.297 22:14:39 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3518138 00:15:44.297 22:14:39 -- common/autotest_common.sh@819 -- # '[' -z 3518138 ']' 00:15:44.297 22:14:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.297 22:14:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:44.297 22:14:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.297 22:14:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:44.297 22:14:39 -- common/autotest_common.sh@10 -- # set +x 00:15:44.297 [2024-07-24 22:14:39.344342] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:15:44.297 [2024-07-24 22:14:39.345210] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:15:44.297 [2024-07-24 22:14:39.345249] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.297 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.297 [2024-07-24 22:14:39.399895] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.557 [2024-07-24 22:14:39.436844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:44.557 [2024-07-24 22:14:39.436966] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.557 [2024-07-24 22:14:39.436978] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.557 [2024-07-24 22:14:39.436986] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.557 [2024-07-24 22:14:39.437087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.557 [2024-07-24 22:14:39.437130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.557 [2024-07-24 22:14:39.437219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.557 [2024-07-24 22:14:39.437222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.558 [2024-07-24 22:14:39.506210] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:15:44.558 [2024-07-24 22:14:39.506307] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:15:44.558 [2024-07-24 22:14:39.506483] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:15:44.558 [2024-07-24 22:14:39.506991] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:44.558 [2024-07-24 22:14:39.507100] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 
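For the interrupt-mode pass, the target is started with --interrupt-mode on cores 0-3, which is what produces the "Set spdk_thread (...) to intr mode" notices above; the VFIOUSER transport is then created with the extra -M -I transport arguments (traced just below). A rough sketch of that startup, with the workspace path abbreviated:

  # start the target in interrupt mode on cores 0-3
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
  # after the RPC socket is up, create the vfio-user transport;
  # -M -I are the transport_args this script variant passes
  scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I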
00:15:45.129 22:14:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:45.129 22:14:40 -- common/autotest_common.sh@852 -- # return 0 00:15:45.129 22:14:40 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:46.069 22:14:41 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:46.330 22:14:41 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:46.330 22:14:41 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:46.330 22:14:41 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:46.330 22:14:41 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:46.330 22:14:41 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:46.590 Malloc1 00:15:46.590 22:14:41 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:46.590 22:14:41 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:46.850 22:14:41 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:47.110 22:14:42 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:47.110 22:14:42 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:47.110 22:14:42 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:47.110 Malloc2 00:15:47.110 22:14:42 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:47.370 22:14:42 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:47.630 22:14:42 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:47.890 22:14:42 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:47.890 22:14:42 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3518138 00:15:47.890 22:14:42 -- common/autotest_common.sh@926 -- # '[' -z 3518138 ']' 00:15:47.890 22:14:42 -- common/autotest_common.sh@930 -- # kill -0 3518138 00:15:47.890 22:14:42 -- common/autotest_common.sh@931 -- # uname 00:15:47.890 22:14:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:47.890 22:14:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3518138 00:15:47.890 22:14:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:47.890 22:14:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:47.890 22:14:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3518138' 00:15:47.890 killing process with pid 3518138 00:15:47.890 22:14:42 -- common/autotest_common.sh@945 -- # kill 3518138 00:15:47.890 22:14:42 -- common/autotest_common.sh@950 -- # wait 3518138 00:15:47.890 22:14:43 -- target/nvmf_vfio_user.sh@97 -- # rm -rf 
/var/run/vfio-user 00:15:48.152 22:14:43 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:48.152 00:15:48.152 real 0m51.093s 00:15:48.152 user 3m22.518s 00:15:48.152 sys 0m3.483s 00:15:48.152 22:14:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:48.152 22:14:43 -- common/autotest_common.sh@10 -- # set +x 00:15:48.152 ************************************ 00:15:48.152 END TEST nvmf_vfio_user 00:15:48.152 ************************************ 00:15:48.152 22:14:43 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:48.152 22:14:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:48.152 22:14:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:48.152 22:14:43 -- common/autotest_common.sh@10 -- # set +x 00:15:48.152 ************************************ 00:15:48.152 START TEST nvmf_vfio_user_nvme_compliance 00:15:48.152 ************************************ 00:15:48.152 22:14:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:48.152 * Looking for test storage... 00:15:48.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:48.152 22:14:43 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.152 22:14:43 -- nvmf/common.sh@7 -- # uname -s 00:15:48.152 22:14:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.152 22:14:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.152 22:14:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.152 22:14:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.152 22:14:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.152 22:14:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.152 22:14:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.152 22:14:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.152 22:14:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.152 22:14:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.152 22:14:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.152 22:14:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:48.152 22:14:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.152 22:14:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.152 22:14:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.152 22:14:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.152 22:14:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.152 22:14:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.152 22:14:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.152 22:14:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.152 22:14:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.152 22:14:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.152 22:14:43 -- paths/export.sh@5 -- # export PATH 00:15:48.152 22:14:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.152 22:14:43 -- nvmf/common.sh@46 -- # : 0 00:15:48.152 22:14:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:48.152 22:14:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:48.152 22:14:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:48.152 22:14:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.152 22:14:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.152 22:14:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:48.152 22:14:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:48.152 22:14:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:48.152 22:14:43 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.152 22:14:43 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:48.152 22:14:43 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:48.152 22:14:43 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:48.152 22:14:43 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:48.152 22:14:43 -- compliance/compliance.sh@20 -- # nvmfpid=3518903 00:15:48.152 22:14:43 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3518903' 00:15:48.152 Process pid: 3518903 00:15:48.152 22:14:43 
-- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:48.152 22:14:43 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:48.152 22:14:43 -- compliance/compliance.sh@24 -- # waitforlisten 3518903 00:15:48.152 22:14:43 -- common/autotest_common.sh@819 -- # '[' -z 3518903 ']' 00:15:48.152 22:14:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.152 22:14:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:48.152 22:14:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.153 22:14:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:48.153 22:14:43 -- common/autotest_common.sh@10 -- # set +x 00:15:48.153 [2024-07-24 22:14:43.227241] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:15:48.153 [2024-07-24 22:14:43.227290] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.153 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.153 [2024-07-24 22:14:43.278812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:48.421 [2024-07-24 22:14:43.317344] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:48.421 [2024-07-24 22:14:43.317476] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.421 [2024-07-24 22:14:43.317487] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.421 [2024-07-24 22:14:43.317496] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
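The startup notices above also point at the tracing hooks: with tracepoint group mask 0xFFFF enabled, a snapshot can be pulled from the running target or recovered from shared memory afterwards. A short sketch using the exact commands the target suggests (the copy destination here is an arbitrary example path):

  # capture a live snapshot of events from the running nvmf target (shm id 0)
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0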
00:15:48.421 [2024-07-24 22:14:43.317593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.421 [2024-07-24 22:14:43.317693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:48.421 [2024-07-24 22:14:43.317696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.991 22:14:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:48.991 22:14:44 -- common/autotest_common.sh@852 -- # return 0 00:15:48.991 22:14:44 -- compliance/compliance.sh@26 -- # sleep 1 00:15:49.932 22:14:45 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:49.932 22:14:45 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:49.932 22:14:45 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:49.932 22:14:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.932 22:14:45 -- common/autotest_common.sh@10 -- # set +x 00:15:49.932 22:14:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.932 22:14:45 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:49.932 22:14:45 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:49.932 22:14:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:49.932 22:14:45 -- common/autotest_common.sh@10 -- # set +x 00:15:49.932 malloc0 00:15:49.932 22:14:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:49.932 22:14:45 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:49.932 22:14:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.192 22:14:45 -- common/autotest_common.sh@10 -- # set +x 00:15:50.192 22:14:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.192 22:14:45 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:50.192 22:14:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.192 22:14:45 -- common/autotest_common.sh@10 -- # set +x 00:15:50.192 22:14:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.192 22:14:45 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:50.192 22:14:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:50.192 22:14:45 -- common/autotest_common.sh@10 -- # set +x 00:15:50.192 22:14:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:50.192 22:14:45 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:50.192 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.192 00:15:50.192 00:15:50.192 CUnit - A unit testing framework for C - Version 2.1-3 00:15:50.192 http://cunit.sourceforge.net/ 00:15:50.192 00:15:50.192 00:15:50.192 Suite: nvme_compliance 00:15:50.192 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 22:14:45.240917] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:50.192 [2024-07-24 22:14:45.240955] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:50.192 [2024-07-24 22:14:45.240962] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:50.192 passed 00:15:50.452 Test: admin_identify_ctrlr_verify_fused ...passed 00:15:50.452 Test: admin_identify_ns ...[2024-07-24 
22:14:45.469054] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:50.452 [2024-07-24 22:14:45.477055] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:50.452 passed 00:15:50.712 Test: admin_get_features_mandatory_features ...passed 00:15:50.712 Test: admin_get_features_optional_features ...passed 00:15:50.971 Test: admin_set_features_number_of_queues ...passed 00:15:50.971 Test: admin_get_log_page_mandatory_logs ...passed 00:15:50.971 Test: admin_get_log_page_with_lpo ...[2024-07-24 22:14:46.078057] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:51.230 passed 00:15:51.230 Test: fabric_property_get ...passed 00:15:51.230 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 22:14:46.253059] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:51.230 passed 00:15:51.531 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 22:14:46.420047] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:51.531 [2024-07-24 22:14:46.436052] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:51.531 passed 00:15:51.531 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 22:14:46.524076] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:51.531 passed 00:15:51.811 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 22:14:46.683054] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:51.811 [2024-07-24 22:14:46.707054] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:51.811 passed 00:15:51.811 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 22:14:46.795098] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:51.811 [2024-07-24 22:14:46.795131] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:51.811 passed 00:15:52.070 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 22:14:46.967054] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:52.070 [2024-07-24 22:14:46.975052] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:52.070 [2024-07-24 22:14:46.983050] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:52.070 [2024-07-24 22:14:46.991059] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:52.070 passed 00:15:52.071 Test: admin_create_io_sq_verify_pc ...[2024-07-24 22:14:47.115060] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:52.071 passed 00:15:53.450 Test: admin_create_io_qp_max_qps ...[2024-07-24 22:14:48.314053] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:53.709 passed 00:15:53.969 Test: admin_create_io_sq_shared_cq ...[2024-07-24 22:14:48.901061] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:53.969 passed 00:15:53.969 00:15:53.969 Run Summary: Type Total Ran Passed Failed Inactive 00:15:53.969 suites 1 1 n/a 0 0 00:15:53.969 tests 18 18 18 0 0 00:15:53.969 asserts 360 360 360 0 n/a 00:15:53.969 00:15:53.969 Elapsed time = 1.525 seconds 00:15:53.969 
22:14:48 -- compliance/compliance.sh@42 -- # killprocess 3518903 00:15:53.969 22:14:48 -- common/autotest_common.sh@926 -- # '[' -z 3518903 ']' 00:15:53.969 22:14:48 -- common/autotest_common.sh@930 -- # kill -0 3518903 00:15:53.969 22:14:48 -- common/autotest_common.sh@931 -- # uname 00:15:53.969 22:14:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:53.969 22:14:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3518903 00:15:53.969 22:14:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:53.969 22:14:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:53.969 22:14:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3518903' 00:15:53.969 killing process with pid 3518903 00:15:53.969 22:14:49 -- common/autotest_common.sh@945 -- # kill 3518903 00:15:53.969 22:14:49 -- common/autotest_common.sh@950 -- # wait 3518903 00:15:54.229 22:14:49 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:54.229 22:14:49 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:54.229 00:15:54.229 real 0m6.147s 00:15:54.229 user 0m17.688s 00:15:54.229 sys 0m0.427s 00:15:54.229 22:14:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:54.229 22:14:49 -- common/autotest_common.sh@10 -- # set +x 00:15:54.229 ************************************ 00:15:54.229 END TEST nvmf_vfio_user_nvme_compliance 00:15:54.229 ************************************ 00:15:54.229 22:14:49 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:54.229 22:14:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:54.229 22:14:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:54.229 22:14:49 -- common/autotest_common.sh@10 -- # set +x 00:15:54.229 ************************************ 00:15:54.229 START TEST nvmf_vfio_user_fuzz 00:15:54.229 ************************************ 00:15:54.229 22:14:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:54.229 * Looking for test storage... 
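For reference, the compliance suite that just finished is driven entirely by a transport ID string; a condensed sketch of the invocation traced above, against the subsystem and socket path the script created:

  # run the CUnit-based NVMe compliance suite against the vfio-user endpoint
  test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'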
00:15:54.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:54.229 22:14:49 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.229 22:14:49 -- nvmf/common.sh@7 -- # uname -s 00:15:54.229 22:14:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.229 22:14:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.229 22:14:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.229 22:14:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.229 22:14:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.229 22:14:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.229 22:14:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.229 22:14:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.229 22:14:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.229 22:14:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.229 22:14:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.229 22:14:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.229 22:14:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.229 22:14:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.229 22:14:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.229 22:14:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:54.229 22:14:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.229 22:14:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.229 22:14:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.229 22:14:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.229 22:14:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.229 22:14:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.229 22:14:49 -- paths/export.sh@5 -- # export PATH 00:15:54.229 22:14:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.229 22:14:49 -- nvmf/common.sh@46 -- # : 0 00:15:54.229 22:14:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:54.229 22:14:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:54.229 22:14:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:54.229 22:14:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.230 22:14:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.230 22:14:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:54.230 22:14:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:54.230 22:14:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:54.230 22:14:49 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:54.230 22:14:49 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:54.230 22:14:49 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:54.230 22:14:49 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:54.230 22:14:49 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:54.230 22:14:49 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:54.230 22:14:49 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:54.230 22:14:49 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3519907 00:15:54.230 22:14:49 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3519907' 00:15:54.230 Process pid: 3519907 00:15:54.230 22:14:49 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:54.230 22:14:49 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3519907 00:15:54.230 22:14:49 -- common/autotest_common.sh@819 -- # '[' -z 3519907 ']' 00:15:54.230 22:14:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.230 22:14:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:54.230 22:14:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:54.230 22:14:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:54.230 22:14:49 -- common/autotest_common.sh@10 -- # set +x 00:15:54.230 22:14:49 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:55.170 22:14:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:55.170 22:14:50 -- common/autotest_common.sh@852 -- # return 0 00:15:55.170 22:14:50 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:56.110 22:14:51 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:56.110 22:14:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.110 22:14:51 -- common/autotest_common.sh@10 -- # set +x 00:15:56.110 22:14:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:56.110 22:14:51 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:56.110 22:14:51 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:56.110 22:14:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.110 22:14:51 -- common/autotest_common.sh@10 -- # set +x 00:15:56.110 malloc0 00:15:56.110 22:14:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:56.110 22:14:51 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:56.110 22:14:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.110 22:14:51 -- common/autotest_common.sh@10 -- # set +x 00:15:56.110 22:14:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:56.110 22:14:51 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:56.110 22:14:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.110 22:14:51 -- common/autotest_common.sh@10 -- # set +x 00:15:56.110 22:14:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:56.110 22:14:51 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:56.110 22:14:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:56.110 22:14:51 -- common/autotest_common.sh@10 -- # set +x 00:15:56.370 22:14:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:56.370 22:14:51 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:56.370 22:14:51 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:28.459 Fuzzing completed. 
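The fuzz stage follows the same build-up (VFIOUSER transport, malloc0 namespace, cnode0 subsystem) and then hands the transport ID to the generic nvme_fuzz tool for a 30-second run with a fixed seed. A trimmed sketch of the run traced above, shown as direct rpc.py calls rather than the script's rpc_cmd wrapper:

  # stand up the target side
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

  # fuzz admin and I/O queues for 30 s with seed 123456 (opcode stats are dumped at the end)
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a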
Shutting down the fuzz application 00:16:28.459 00:16:28.459 Dumping successful admin opcodes: 00:16:28.459 8, 9, 10, 24, 00:16:28.459 Dumping successful io opcodes: 00:16:28.459 0, 00:16:28.459 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1158558, total successful commands: 4557, random_seed: 3545841088 00:16:28.459 NS: 0x200003a1ef00 admin qp, Total commands completed: 267894, total successful commands: 2157, random_seed: 4092348992 00:16:28.459 22:15:21 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:28.459 22:15:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.459 22:15:21 -- common/autotest_common.sh@10 -- # set +x 00:16:28.459 22:15:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.459 22:15:21 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3519907 00:16:28.459 22:15:21 -- common/autotest_common.sh@926 -- # '[' -z 3519907 ']' 00:16:28.459 22:15:21 -- common/autotest_common.sh@930 -- # kill -0 3519907 00:16:28.459 22:15:21 -- common/autotest_common.sh@931 -- # uname 00:16:28.459 22:15:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:28.459 22:15:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3519907 00:16:28.459 22:15:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:28.459 22:15:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:28.459 22:15:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3519907' 00:16:28.459 killing process with pid 3519907 00:16:28.459 22:15:21 -- common/autotest_common.sh@945 -- # kill 3519907 00:16:28.459 22:15:21 -- common/autotest_common.sh@950 -- # wait 3519907 00:16:28.459 22:15:21 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:28.459 22:15:21 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:28.459 00:16:28.459 real 0m32.642s 00:16:28.459 user 0m37.306s 00:16:28.459 sys 0m24.972s 00:16:28.459 22:15:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.459 22:15:21 -- common/autotest_common.sh@10 -- # set +x 00:16:28.459 ************************************ 00:16:28.459 END TEST nvmf_vfio_user_fuzz 00:16:28.459 ************************************ 00:16:28.459 22:15:21 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:28.459 22:15:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:28.459 22:15:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:28.459 22:15:21 -- common/autotest_common.sh@10 -- # set +x 00:16:28.459 ************************************ 00:16:28.459 START TEST nvmf_host_management 00:16:28.459 ************************************ 00:16:28.459 22:15:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:28.459 * Looking for test storage... 
00:16:28.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.459 22:15:22 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.459 22:15:22 -- nvmf/common.sh@7 -- # uname -s 00:16:28.459 22:15:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.459 22:15:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.459 22:15:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.459 22:15:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.459 22:15:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.459 22:15:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.459 22:15:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.459 22:15:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.459 22:15:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.459 22:15:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.459 22:15:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.459 22:15:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.459 22:15:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.459 22:15:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.459 22:15:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.459 22:15:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.459 22:15:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.459 22:15:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.459 22:15:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.459 22:15:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.459 22:15:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.459 22:15:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.459 22:15:22 -- paths/export.sh@5 -- # export PATH 00:16:28.459 22:15:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.459 22:15:22 -- nvmf/common.sh@46 -- # : 0 00:16:28.459 22:15:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:28.459 22:15:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:28.459 22:15:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:28.459 22:15:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.459 22:15:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.459 22:15:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:28.459 22:15:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:28.459 22:15:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:28.459 22:15:22 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:28.459 22:15:22 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:28.459 22:15:22 -- target/host_management.sh@104 -- # nvmftestinit 00:16:28.459 22:15:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:28.459 22:15:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.459 22:15:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:28.459 22:15:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:28.459 22:15:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:28.459 22:15:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.459 22:15:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.459 22:15:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.459 22:15:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:28.459 22:15:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:28.460 22:15:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:28.460 22:15:22 -- common/autotest_common.sh@10 -- # set +x 00:16:32.654 22:15:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:32.654 22:15:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:32.654 22:15:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:32.654 22:15:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:32.654 22:15:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:32.654 22:15:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:32.654 22:15:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:32.654 22:15:27 -- nvmf/common.sh@294 -- # net_devs=() 00:16:32.654 22:15:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:32.654 
22:15:27 -- nvmf/common.sh@295 -- # e810=() 00:16:32.654 22:15:27 -- nvmf/common.sh@295 -- # local -ga e810 00:16:32.654 22:15:27 -- nvmf/common.sh@296 -- # x722=() 00:16:32.654 22:15:27 -- nvmf/common.sh@296 -- # local -ga x722 00:16:32.655 22:15:27 -- nvmf/common.sh@297 -- # mlx=() 00:16:32.655 22:15:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:32.655 22:15:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:32.655 22:15:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:32.655 22:15:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:32.655 22:15:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:32.655 22:15:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:32.655 22:15:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:32.655 22:15:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:32.655 22:15:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:32.655 22:15:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:32.655 22:15:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:32.655 22:15:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:32.655 22:15:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:32.655 22:15:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:32.655 22:15:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:32.655 22:15:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:32.655 22:15:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:32.655 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:32.655 22:15:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:32.655 22:15:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:32.655 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:32.655 22:15:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:32.655 22:15:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:32.655 22:15:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.655 22:15:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:32.655 22:15:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.655 22:15:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:86:00.0: cvl_0_0' 00:16:32.655 Found net devices under 0000:86:00.0: cvl_0_0 00:16:32.655 22:15:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.655 22:15:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:32.655 22:15:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.655 22:15:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:32.655 22:15:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.655 22:15:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:32.655 Found net devices under 0000:86:00.1: cvl_0_1 00:16:32.655 22:15:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.655 22:15:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:32.655 22:15:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:32.655 22:15:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:32.655 22:15:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:32.655 22:15:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:32.655 22:15:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:32.655 22:15:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:32.655 22:15:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:32.655 22:15:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:32.655 22:15:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:32.655 22:15:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:32.655 22:15:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:32.655 22:15:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:32.655 22:15:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:32.655 22:15:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:32.655 22:15:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:32.655 22:15:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:32.655 22:15:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:32.655 22:15:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:32.655 22:15:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:32.655 22:15:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:32.655 22:15:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:32.655 22:15:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:32.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:32.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:16:32.655 00:16:32.655 --- 10.0.0.2 ping statistics --- 00:16:32.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.655 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:16:32.655 22:15:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:32.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:32.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:16:32.655 00:16:32.655 --- 10.0.0.1 ping statistics --- 00:16:32.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:32.655 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:16:32.655 22:15:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:32.655 22:15:27 -- nvmf/common.sh@410 -- # return 0 00:16:32.655 22:15:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:32.655 22:15:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:32.655 22:15:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:32.655 22:15:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:32.655 22:15:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:32.655 22:15:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:32.655 22:15:27 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:32.655 22:15:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:32.655 22:15:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:32.655 22:15:27 -- common/autotest_common.sh@10 -- # set +x 00:16:32.655 ************************************ 00:16:32.655 START TEST nvmf_host_management 00:16:32.655 ************************************ 00:16:32.655 22:15:27 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:32.655 22:15:27 -- target/host_management.sh@69 -- # starttarget 00:16:32.655 22:15:27 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:32.655 22:15:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:32.655 22:15:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:32.655 22:15:27 -- common/autotest_common.sh@10 -- # set +x 00:16:32.655 22:15:27 -- nvmf/common.sh@469 -- # nvmfpid=3528819 00:16:32.655 22:15:27 -- nvmf/common.sh@470 -- # waitforlisten 3528819 00:16:32.655 22:15:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:32.655 22:15:27 -- common/autotest_common.sh@819 -- # '[' -z 3528819 ']' 00:16:32.655 22:15:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.655 22:15:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:32.655 22:15:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.655 22:15:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:32.655 22:15:27 -- common/autotest_common.sh@10 -- # set +x 00:16:32.655 [2024-07-24 22:15:27.443697] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
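(A condensed, hand-written sketch of the interface plumbing traced above; the cvl_0_0/cvl_0_1 names, the 10.0.0.0/24 addresses and port 4420 are specific to this runner. The target-side port is moved into a private network namespace so initiator and target can exercise NVMe/TCP on a single host:)

  ip netns add cvl_0_0_ns_spdk                        # namespace that will host nvmf_tgt
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic
  ping -c 1 10.0.0.2                                  # initiator-to-target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target-to-initiator reachability check

The nvmf_tgt application is then launched inside cvl_0_0_ns_spdk (the 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E' line above), so it listens on 10.0.0.2:4420 while bdevperf connects from the root namespace.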
00:16:32.655 [2024-07-24 22:15:27.443738] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:32.655 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.655 [2024-07-24 22:15:27.500746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:32.655 [2024-07-24 22:15:27.541999] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:32.655 [2024-07-24 22:15:27.542114] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:32.655 [2024-07-24 22:15:27.542122] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:32.655 [2024-07-24 22:15:27.542128] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:32.655 [2024-07-24 22:15:27.542177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:32.655 [2024-07-24 22:15:27.542243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:32.655 [2024-07-24 22:15:27.542351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.655 [2024-07-24 22:15:27.542353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:33.225 22:15:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:33.225 22:15:28 -- common/autotest_common.sh@852 -- # return 0 00:16:33.225 22:15:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:33.225 22:15:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:33.225 22:15:28 -- common/autotest_common.sh@10 -- # set +x 00:16:33.225 22:15:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.225 22:15:28 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:33.225 22:15:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:33.225 22:15:28 -- common/autotest_common.sh@10 -- # set +x 00:16:33.225 [2024-07-24 22:15:28.287466] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.225 22:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:33.225 22:15:28 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:33.225 22:15:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:33.225 22:15:28 -- common/autotest_common.sh@10 -- # set +x 00:16:33.225 22:15:28 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:33.225 22:15:28 -- target/host_management.sh@23 -- # cat 00:16:33.225 22:15:28 -- target/host_management.sh@30 -- # rpc_cmd 00:16:33.225 22:15:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:33.225 22:15:28 -- common/autotest_common.sh@10 -- # set +x 00:16:33.225 Malloc0 00:16:33.225 [2024-07-24 22:15:28.347342] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:33.485 22:15:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:33.485 22:15:28 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:33.485 22:15:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:33.485 22:15:28 -- common/autotest_common.sh@10 -- # set +x 00:16:33.485 22:15:28 -- target/host_management.sh@73 -- # perfpid=3529087 00:16:33.485 22:15:28 -- target/host_management.sh@74 -- # 
waitforlisten 3529087 /var/tmp/bdevperf.sock 00:16:33.485 22:15:28 -- common/autotest_common.sh@819 -- # '[' -z 3529087 ']' 00:16:33.485 22:15:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:33.485 22:15:28 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:33.485 22:15:28 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:33.485 22:15:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:33.485 22:15:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:33.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:33.485 22:15:28 -- nvmf/common.sh@520 -- # config=() 00:16:33.485 22:15:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:33.485 22:15:28 -- nvmf/common.sh@520 -- # local subsystem config 00:16:33.485 22:15:28 -- common/autotest_common.sh@10 -- # set +x 00:16:33.485 22:15:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:33.485 22:15:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:33.485 { 00:16:33.485 "params": { 00:16:33.486 "name": "Nvme$subsystem", 00:16:33.486 "trtype": "$TEST_TRANSPORT", 00:16:33.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:33.486 "adrfam": "ipv4", 00:16:33.486 "trsvcid": "$NVMF_PORT", 00:16:33.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:33.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:33.486 "hdgst": ${hdgst:-false}, 00:16:33.486 "ddgst": ${ddgst:-false} 00:16:33.486 }, 00:16:33.486 "method": "bdev_nvme_attach_controller" 00:16:33.486 } 00:16:33.486 EOF 00:16:33.486 )") 00:16:33.486 22:15:28 -- nvmf/common.sh@542 -- # cat 00:16:33.486 22:15:28 -- nvmf/common.sh@544 -- # jq . 00:16:33.486 22:15:28 -- nvmf/common.sh@545 -- # IFS=, 00:16:33.486 22:15:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:33.486 "params": { 00:16:33.486 "name": "Nvme0", 00:16:33.486 "trtype": "tcp", 00:16:33.486 "traddr": "10.0.0.2", 00:16:33.486 "adrfam": "ipv4", 00:16:33.486 "trsvcid": "4420", 00:16:33.486 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:33.486 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:33.486 "hdgst": false, 00:16:33.486 "ddgst": false 00:16:33.486 }, 00:16:33.486 "method": "bdev_nvme_attach_controller" 00:16:33.486 }' 00:16:33.486 [2024-07-24 22:15:28.435770] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:16:33.486 [2024-07-24 22:15:28.435819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529087 ] 00:16:33.486 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.486 [2024-07-24 22:15:28.491387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.486 [2024-07-24 22:15:28.529229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.745 Running I/O for 10 seconds... 
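(The bdevperf launch traced above, condensed into a stand-alone form. The socket path, queue depth, I/O size and 10-second verify workload are exactly what the trace shows; feeding the gen_nvmf_target_json output through process substitution is an assumption about how the /dev/fd/63 argument is produced:)

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10        # 64-deep queue, 64 KiB I/O, verify workload, 10 s

The generated JSON, whose expansion is printed just above, attaches controller Nvme0 over NVMe/TCP to 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode0, host nqn.2016-06.io.spdk:host0), so bdevperf exposes Nvme0n1 and the waitforio loop below can poll bdev_get_iostat against it.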
00:16:34.318 22:15:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:34.318 22:15:29 -- common/autotest_common.sh@852 -- # return 0 00:16:34.318 22:15:29 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:34.318 22:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:34.318 22:15:29 -- common/autotest_common.sh@10 -- # set +x 00:16:34.318 22:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:34.318 22:15:29 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:34.318 22:15:29 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:34.318 22:15:29 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:34.318 22:15:29 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:34.318 22:15:29 -- target/host_management.sh@52 -- # local ret=1 00:16:34.318 22:15:29 -- target/host_management.sh@53 -- # local i 00:16:34.318 22:15:29 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:34.318 22:15:29 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:34.318 22:15:29 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:34.318 22:15:29 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:34.318 22:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:34.318 22:15:29 -- common/autotest_common.sh@10 -- # set +x 00:16:34.318 22:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:34.318 22:15:29 -- target/host_management.sh@55 -- # read_io_count=1010 00:16:34.318 22:15:29 -- target/host_management.sh@58 -- # '[' 1010 -ge 100 ']' 00:16:34.318 22:15:29 -- target/host_management.sh@59 -- # ret=0 00:16:34.318 22:15:29 -- target/host_management.sh@60 -- # break 00:16:34.318 22:15:29 -- target/host_management.sh@64 -- # return 0 00:16:34.318 22:15:29 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:34.318 22:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:34.318 22:15:29 -- common/autotest_common.sh@10 -- # set +x 00:16:34.318 [2024-07-24 22:15:29.306811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the state(5) to be set 00:16:34.318 [2024-07-24 22:15:29.306857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the state(5) to be set 00:16:34.318 [2024-07-24 22:15:29.306865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the state(5) to be set 00:16:34.318 [2024-07-24 22:15:29.306872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the state(5) to be set 00:16:34.318 [2024-07-24 22:15:29.306878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the state(5) to be set 00:16:34.318 [2024-07-24 22:15:29.306884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the state(5) to be set 00:16:34.318 [2024-07-24 22:15:29.306889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the state(5) to be set 00:16:34.318 [2024-07-24 22:15:29.306895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the 
state(5) to be set 00:16:34.318 [2024-07-24 22:15:29.306901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the state(5) to be set 00:16:34.318 [... the same tcp.c:1574 message for tqpair=0x1d46020 repeats with consecutive timestamps through 22:15:29.307157; the intervening repetitions are collapsed ...] 00:16:34.318 [2024-07-24 
22:15:29.307163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the state(5) to be set 00:16:34.319 [2024-07-24 22:15:29.307169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the state(5) to be set 00:16:34.319 [2024-07-24 22:15:29.307174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the state(5) to be set 00:16:34.319 [2024-07-24 22:15:29.307180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the state(5) to be set 00:16:34.319 [2024-07-24 22:15:29.307186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d46020 is same with the state(5) to be set 00:16:34.319 [2024-07-24 22:15:29.307870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.307901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.307917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.307926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.307935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.307943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.307951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.307959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.307967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.307978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.307987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.307994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.319 [2024-07-24 22:15:29.308467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.319 [2024-07-24 22:15:29.308474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:34.320 [2024-07-24 22:15:29.308515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 
[2024-07-24 22:15:29.308673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 
22:15:29.308833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:34.320 [2024-07-24 22:15:29.308919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:34.320 [2024-07-24 22:15:29.308927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f3970 is same with the state(5) to be set 00:16:34.320 [2024-07-24 22:15:29.308978] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16f3970 was disconnected and freed. reset controller. 
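(What the burst of ABORTED completions above is exercising, expressed as the two RPCs the harness issues through its rpc_cmd wrapper around scripts/rpc.py; a sketch of the sequence, not a verbatim copy of the test script:)

  scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # with 64 commands in flight, every queued I/O completes with ABORTED - SQ DELETION and the
  # qpair is disconnected and freed, after which the initiator schedules a controller reset
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # access is restored so the reset can reconnect ('Resetting controller successful' notice below)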
00:16:34.320 [2024-07-24 22:15:29.309885] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:34.320 task offset: 12800 on job bdev=Nvme0n1 fails 00:16:34.320 00:16:34.320 Latency(us) 00:16:34.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.320 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:34.320 Job: Nvme0n1 ended in about 0.46 seconds with error 00:16:34.320 Verification LBA range: start 0x0 length 0x400 00:16:34.320 Nvme0n1 : 0.46 2379.67 148.73 138.20 0.00 25091.84 2892.13 53568.56 00:16:34.320 =================================================================================================================== 00:16:34.320 Total : 2379.67 148.73 138.20 0.00 25091.84 2892.13 53568.56 00:16:34.320 [2024-07-24 22:15:29.311476] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:34.320 [2024-07-24 22:15:29.311492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f9510 (9): Bad file descriptor 00:16:34.320 22:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:34.320 22:15:29 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:34.320 22:15:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:34.320 22:15:29 -- common/autotest_common.sh@10 -- # set +x 00:16:34.320 22:15:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:34.320 22:15:29 -- target/host_management.sh@87 -- # sleep 1 00:16:34.320 [2024-07-24 22:15:29.415305] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:35.261 22:15:30 -- target/host_management.sh@91 -- # kill -9 3529087 00:16:35.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3529087) - No such process 00:16:35.261 22:15:30 -- target/host_management.sh@91 -- # true 00:16:35.261 22:15:30 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:35.261 22:15:30 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:35.261 22:15:30 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:35.261 22:15:30 -- nvmf/common.sh@520 -- # config=() 00:16:35.261 22:15:30 -- nvmf/common.sh@520 -- # local subsystem config 00:16:35.261 22:15:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:35.261 22:15:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:35.261 { 00:16:35.261 "params": { 00:16:35.261 "name": "Nvme$subsystem", 00:16:35.261 "trtype": "$TEST_TRANSPORT", 00:16:35.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:35.261 "adrfam": "ipv4", 00:16:35.261 "trsvcid": "$NVMF_PORT", 00:16:35.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:35.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:35.261 "hdgst": ${hdgst:-false}, 00:16:35.261 "ddgst": ${ddgst:-false} 00:16:35.261 }, 00:16:35.261 "method": "bdev_nvme_attach_controller" 00:16:35.261 } 00:16:35.261 EOF 00:16:35.261 )") 00:16:35.261 22:15:30 -- nvmf/common.sh@542 -- # cat 00:16:35.261 22:15:30 -- nvmf/common.sh@544 -- # jq . 
00:16:35.261 22:15:30 -- nvmf/common.sh@545 -- # IFS=, 00:16:35.261 22:15:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:35.261 "params": { 00:16:35.261 "name": "Nvme0", 00:16:35.261 "trtype": "tcp", 00:16:35.261 "traddr": "10.0.0.2", 00:16:35.261 "adrfam": "ipv4", 00:16:35.261 "trsvcid": "4420", 00:16:35.261 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:35.262 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:35.262 "hdgst": false, 00:16:35.262 "ddgst": false 00:16:35.262 }, 00:16:35.262 "method": "bdev_nvme_attach_controller" 00:16:35.262 }' 00:16:35.262 [2024-07-24 22:15:30.375323] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:16:35.262 [2024-07-24 22:15:30.375375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529359 ] 00:16:35.521 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.521 [2024-07-24 22:15:30.431072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.521 [2024-07-24 22:15:30.469181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.780 Running I/O for 1 seconds... 00:16:36.784 00:16:36.784 Latency(us) 00:16:36.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.784 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:36.784 Verification LBA range: start 0x0 length 0x400 00:16:36.784 Nvme0n1 : 1.01 2823.45 176.47 0.00 0.00 22375.76 2350.75 54480.36 00:16:36.784 =================================================================================================================== 00:16:36.784 Total : 2823.45 176.47 0.00 0.00 22375.76 2350.75 54480.36 00:16:36.784 22:15:31 -- target/host_management.sh@101 -- # stoptarget 00:16:36.784 22:15:31 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:36.784 22:15:31 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:36.784 22:15:31 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:36.784 22:15:31 -- target/host_management.sh@40 -- # nvmftestfini 00:16:36.784 22:15:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:36.784 22:15:31 -- nvmf/common.sh@116 -- # sync 00:16:36.784 22:15:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:36.784 22:15:31 -- nvmf/common.sh@119 -- # set +e 00:16:36.784 22:15:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:36.784 22:15:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:36.784 rmmod nvme_tcp 00:16:37.044 rmmod nvme_fabrics 00:16:37.044 rmmod nvme_keyring 00:16:37.044 22:15:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:37.044 22:15:31 -- nvmf/common.sh@123 -- # set -e 00:16:37.044 22:15:31 -- nvmf/common.sh@124 -- # return 0 00:16:37.044 22:15:31 -- nvmf/common.sh@477 -- # '[' -n 3528819 ']' 00:16:37.044 22:15:31 -- nvmf/common.sh@478 -- # killprocess 3528819 00:16:37.044 22:15:31 -- common/autotest_common.sh@926 -- # '[' -z 3528819 ']' 00:16:37.044 22:15:31 -- common/autotest_common.sh@930 -- # kill -0 3528819 00:16:37.044 22:15:31 -- common/autotest_common.sh@931 -- # uname 00:16:37.044 22:15:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:37.044 22:15:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3528819 00:16:37.044 22:15:31 
-- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:37.044 22:15:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:37.044 22:15:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3528819' 00:16:37.044 killing process with pid 3528819 00:16:37.044 22:15:31 -- common/autotest_common.sh@945 -- # kill 3528819 00:16:37.044 22:15:31 -- common/autotest_common.sh@950 -- # wait 3528819 00:16:37.044 [2024-07-24 22:15:32.165438] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:37.303 22:15:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:37.303 22:15:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:37.303 22:15:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:37.304 22:15:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.304 22:15:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:37.304 22:15:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.304 22:15:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.304 22:15:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.212 22:15:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:39.212 00:16:39.213 real 0m6.855s 00:16:39.213 user 0m20.941s 00:16:39.213 sys 0m1.181s 00:16:39.213 22:15:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.213 22:15:34 -- common/autotest_common.sh@10 -- # set +x 00:16:39.213 ************************************ 00:16:39.213 END TEST nvmf_host_management 00:16:39.213 ************************************ 00:16:39.213 22:15:34 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:39.213 00:16:39.213 real 0m12.349s 00:16:39.213 user 0m22.371s 00:16:39.213 sys 0m5.220s 00:16:39.213 22:15:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.213 22:15:34 -- common/autotest_common.sh@10 -- # set +x 00:16:39.213 ************************************ 00:16:39.213 END TEST nvmf_host_management 00:16:39.213 ************************************ 00:16:39.213 22:15:34 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:39.213 22:15:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:39.213 22:15:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.213 22:15:34 -- common/autotest_common.sh@10 -- # set +x 00:16:39.213 ************************************ 00:16:39.213 START TEST nvmf_lvol 00:16:39.213 ************************************ 00:16:39.213 22:15:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:39.473 * Looking for test storage... 
00:16:39.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.473 22:15:34 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.473 22:15:34 -- nvmf/common.sh@7 -- # uname -s 00:16:39.473 22:15:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.473 22:15:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.473 22:15:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.473 22:15:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.473 22:15:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.473 22:15:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.473 22:15:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.473 22:15:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.473 22:15:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.473 22:15:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.473 22:15:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.473 22:15:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.473 22:15:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.473 22:15:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.474 22:15:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.474 22:15:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.474 22:15:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.474 22:15:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.474 22:15:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.474 22:15:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.474 22:15:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.474 22:15:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.474 22:15:34 -- paths/export.sh@5 -- # export PATH 00:16:39.474 22:15:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.474 22:15:34 -- nvmf/common.sh@46 -- # : 0 00:16:39.474 22:15:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:39.474 22:15:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:39.474 22:15:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:39.474 22:15:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.474 22:15:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.474 22:15:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:39.474 22:15:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:39.474 22:15:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:39.474 22:15:34 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:39.474 22:15:34 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.474 22:15:34 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:39.474 22:15:34 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:39.474 22:15:34 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.474 22:15:34 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:39.474 22:15:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:39.474 22:15:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.474 22:15:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:39.474 22:15:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:39.474 22:15:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:39.474 22:15:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.474 22:15:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.474 22:15:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.474 22:15:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:39.474 22:15:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:39.474 22:15:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:39.474 22:15:34 -- common/autotest_common.sh@10 -- # set +x 00:16:46.048 22:15:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:46.048 22:15:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:46.048 22:15:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:46.048 22:15:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:46.048 22:15:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:46.048 22:15:39 
-- nvmf/common.sh@292 -- # pci_drivers=() 00:16:46.048 22:15:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:46.048 22:15:39 -- nvmf/common.sh@294 -- # net_devs=() 00:16:46.048 22:15:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:46.048 22:15:39 -- nvmf/common.sh@295 -- # e810=() 00:16:46.048 22:15:39 -- nvmf/common.sh@295 -- # local -ga e810 00:16:46.048 22:15:39 -- nvmf/common.sh@296 -- # x722=() 00:16:46.048 22:15:39 -- nvmf/common.sh@296 -- # local -ga x722 00:16:46.048 22:15:39 -- nvmf/common.sh@297 -- # mlx=() 00:16:46.048 22:15:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:46.048 22:15:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.048 22:15:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.048 22:15:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.048 22:15:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.048 22:15:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.048 22:15:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.048 22:15:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.048 22:15:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.048 22:15:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.048 22:15:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.048 22:15:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.048 22:15:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:46.048 22:15:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:46.048 22:15:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:46.048 22:15:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:46.048 22:15:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:46.048 22:15:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:46.048 22:15:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:46.048 22:15:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:46.048 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:46.048 22:15:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:46.048 22:15:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:46.048 22:15:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.048 22:15:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.048 22:15:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:46.049 22:15:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:46.049 22:15:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:46.049 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:46.049 22:15:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:46.049 22:15:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:46.049 22:15:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.049 22:15:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.049 22:15:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:46.049 22:15:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:46.049 22:15:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:46.049 22:15:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:46.049 22:15:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:46.049 22:15:39 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.049 22:15:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:46.049 22:15:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.049 22:15:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:46.049 Found net devices under 0000:86:00.0: cvl_0_0 00:16:46.049 22:15:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.049 22:15:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:46.049 22:15:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.049 22:15:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:46.049 22:15:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.049 22:15:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:46.049 Found net devices under 0000:86:00.1: cvl_0_1 00:16:46.049 22:15:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.049 22:15:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:46.049 22:15:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:46.049 22:15:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:46.049 22:15:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:46.049 22:15:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:46.049 22:15:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.049 22:15:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.049 22:15:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:46.049 22:15:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:46.049 22:15:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:46.049 22:15:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:46.049 22:15:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:46.049 22:15:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:46.049 22:15:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.049 22:15:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:46.049 22:15:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:46.049 22:15:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:46.049 22:15:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:46.049 22:15:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:46.049 22:15:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:46.049 22:15:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:46.049 22:15:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:46.049 22:15:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:46.049 22:15:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:46.049 22:15:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:46.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:16:46.049 00:16:46.049 --- 10.0.0.2 ping statistics --- 00:16:46.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.049 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:16:46.049 22:15:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:46.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:16:46.049 00:16:46.049 --- 10.0.0.1 ping statistics --- 00:16:46.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.049 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:16:46.049 22:15:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.049 22:15:40 -- nvmf/common.sh@410 -- # return 0 00:16:46.049 22:15:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:46.049 22:15:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.049 22:15:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:46.049 22:15:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:46.049 22:15:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.049 22:15:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:46.049 22:15:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:46.049 22:15:40 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:46.049 22:15:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:46.049 22:15:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:46.049 22:15:40 -- common/autotest_common.sh@10 -- # set +x 00:16:46.049 22:15:40 -- nvmf/common.sh@469 -- # nvmfpid=3533214 00:16:46.049 22:15:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:46.049 22:15:40 -- nvmf/common.sh@470 -- # waitforlisten 3533214 00:16:46.049 22:15:40 -- common/autotest_common.sh@819 -- # '[' -z 3533214 ']' 00:16:46.049 22:15:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.049 22:15:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:46.049 22:15:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.049 22:15:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:46.049 22:15:40 -- common/autotest_common.sh@10 -- # set +x 00:16:46.049 [2024-07-24 22:15:40.297423] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:16:46.049 [2024-07-24 22:15:40.297469] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.049 EAL: No free 2048 kB hugepages reported on node 1 00:16:46.049 [2024-07-24 22:15:40.356365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:46.049 [2024-07-24 22:15:40.394628] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:46.049 [2024-07-24 22:15:40.394744] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.049 [2024-07-24 22:15:40.394752] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.049 [2024-07-24 22:15:40.394760] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:46.049 [2024-07-24 22:15:40.394808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.049 [2024-07-24 22:15:40.394908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.049 [2024-07-24 22:15:40.394909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.049 22:15:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:46.049 22:15:41 -- common/autotest_common.sh@852 -- # return 0 00:16:46.049 22:15:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:46.049 22:15:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:46.049 22:15:41 -- common/autotest_common.sh@10 -- # set +x 00:16:46.049 22:15:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.049 22:15:41 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:46.309 [2024-07-24 22:15:41.287063] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.309 22:15:41 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.569 22:15:41 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:46.569 22:15:41 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:46.569 22:15:41 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:46.569 22:15:41 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:46.828 22:15:41 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:47.086 22:15:42 -- target/nvmf_lvol.sh@29 -- # lvs=858a3a3a-352b-4a25-89d6-6de296d0f081 00:16:47.086 22:15:42 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 858a3a3a-352b-4a25-89d6-6de296d0f081 lvol 20 00:16:47.344 22:15:42 -- target/nvmf_lvol.sh@32 -- # lvol=6fb27a8b-b653-4f97-825a-a39dd0fb78ce 00:16:47.344 22:15:42 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:47.344 22:15:42 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6fb27a8b-b653-4f97-825a-a39dd0fb78ce 00:16:47.603 22:15:42 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:47.603 [2024-07-24 22:15:42.736201] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:47.863 22:15:42 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:47.863 22:15:42 -- target/nvmf_lvol.sh@42 -- # perf_pid=3533653 00:16:47.863 22:15:42 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:47.863 22:15:42 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:47.863 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.246 
22:15:43 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6fb27a8b-b653-4f97-825a-a39dd0fb78ce MY_SNAPSHOT 00:16:49.247 22:15:44 -- target/nvmf_lvol.sh@47 -- # snapshot=9179123b-897d-4b58-98db-f74fb0898c95 00:16:49.247 22:15:44 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6fb27a8b-b653-4f97-825a-a39dd0fb78ce 30 00:16:49.247 22:15:44 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9179123b-897d-4b58-98db-f74fb0898c95 MY_CLONE 00:16:49.506 22:15:44 -- target/nvmf_lvol.sh@49 -- # clone=1cc6df90-c386-4c99-8dd9-430da7a84c53 00:16:49.506 22:15:44 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1cc6df90-c386-4c99-8dd9-430da7a84c53 00:16:49.766 22:15:44 -- target/nvmf_lvol.sh@53 -- # wait 3533653 00:16:59.750 Initializing NVMe Controllers 00:16:59.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:59.750 Controller IO queue size 128, less than required. 00:16:59.750 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:59.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:59.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:59.750 Initialization complete. Launching workers. 00:16:59.750 ======================================================== 00:16:59.750 Latency(us) 00:16:59.750 Device Information : IOPS MiB/s Average min max 00:16:59.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12435.90 48.58 10298.85 893.66 57426.71 00:16:59.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11777.00 46.00 10872.09 3609.34 42238.82 00:16:59.750 ======================================================== 00:16:59.750 Total : 24212.90 94.58 10577.67 893.66 57426.71 00:16:59.750 00:16:59.750 22:15:53 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:59.750 22:15:53 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6fb27a8b-b653-4f97-825a-a39dd0fb78ce 00:16:59.750 22:15:53 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 858a3a3a-352b-4a25-89d6-6de296d0f081 00:16:59.750 22:15:53 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:59.750 22:15:53 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:59.750 22:15:53 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:59.750 22:15:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:59.750 22:15:53 -- nvmf/common.sh@116 -- # sync 00:16:59.750 22:15:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:59.750 22:15:53 -- nvmf/common.sh@119 -- # set +e 00:16:59.750 22:15:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:59.750 22:15:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:59.750 rmmod nvme_tcp 00:16:59.750 rmmod nvme_fabrics 00:16:59.750 rmmod nvme_keyring 00:16:59.750 22:15:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:59.750 22:15:53 -- nvmf/common.sh@123 -- # set -e 00:16:59.750 22:15:53 -- nvmf/common.sh@124 -- # return 0 00:16:59.750 22:15:53 -- nvmf/common.sh@477 -- # '[' -n 3533214 ']' 
00:16:59.750 22:15:53 -- nvmf/common.sh@478 -- # killprocess 3533214 00:16:59.750 22:15:53 -- common/autotest_common.sh@926 -- # '[' -z 3533214 ']' 00:16:59.750 22:15:53 -- common/autotest_common.sh@930 -- # kill -0 3533214 00:16:59.750 22:15:53 -- common/autotest_common.sh@931 -- # uname 00:16:59.750 22:15:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:59.750 22:15:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3533214 00:16:59.750 22:15:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:59.750 22:15:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:59.750 22:15:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3533214' 00:16:59.750 killing process with pid 3533214 00:16:59.750 22:15:53 -- common/autotest_common.sh@945 -- # kill 3533214 00:16:59.750 22:15:53 -- common/autotest_common.sh@950 -- # wait 3533214 00:16:59.750 22:15:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:59.750 22:15:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:59.750 22:15:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:59.750 22:15:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:59.750 22:15:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:59.750 22:15:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.750 22:15:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.750 22:15:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.660 22:15:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:01.660 00:17:01.660 real 0m21.953s 00:17:01.660 user 1m3.976s 00:17:01.660 sys 0m7.085s 00:17:01.660 22:15:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:01.660 22:15:56 -- common/autotest_common.sh@10 -- # set +x 00:17:01.660 ************************************ 00:17:01.660 END TEST nvmf_lvol 00:17:01.660 ************************************ 00:17:01.660 22:15:56 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:01.660 22:15:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:01.660 22:15:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:01.660 22:15:56 -- common/autotest_common.sh@10 -- # set +x 00:17:01.660 ************************************ 00:17:01.660 START TEST nvmf_lvs_grow 00:17:01.660 ************************************ 00:17:01.660 22:15:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:01.660 * Looking for test storage... 
00:17:01.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:01.660 22:15:56 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.660 22:15:56 -- nvmf/common.sh@7 -- # uname -s 00:17:01.660 22:15:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.660 22:15:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.660 22:15:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.660 22:15:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.660 22:15:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.660 22:15:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.660 22:15:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.660 22:15:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.660 22:15:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.660 22:15:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.660 22:15:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.660 22:15:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.660 22:15:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.660 22:15:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.660 22:15:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.660 22:15:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:01.660 22:15:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.660 22:15:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.660 22:15:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.660 22:15:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.660 22:15:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.660 22:15:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.660 22:15:56 -- paths/export.sh@5 -- # export PATH 00:17:01.660 22:15:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.660 22:15:56 -- nvmf/common.sh@46 -- # : 0 00:17:01.660 22:15:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:01.660 22:15:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:01.660 22:15:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:01.660 22:15:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.660 22:15:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.660 22:15:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:01.660 22:15:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:01.660 22:15:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:01.660 22:15:56 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:01.660 22:15:56 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.660 22:15:56 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:01.660 22:15:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:01.660 22:15:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.660 22:15:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:01.660 22:15:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:01.660 22:15:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:01.660 22:15:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.660 22:15:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.660 22:15:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.660 22:15:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:01.660 22:15:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:01.660 22:15:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:01.660 22:15:56 -- common/autotest_common.sh@10 -- # set +x 00:17:06.943 22:16:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:06.943 22:16:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:06.943 22:16:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:06.943 22:16:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:06.943 22:16:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:06.943 22:16:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:06.943 22:16:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:06.943 22:16:01 -- nvmf/common.sh@294 -- # net_devs=() 00:17:06.943 22:16:01 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:17:06.943 22:16:01 -- nvmf/common.sh@295 -- # e810=() 00:17:06.943 22:16:01 -- nvmf/common.sh@295 -- # local -ga e810 00:17:06.943 22:16:01 -- nvmf/common.sh@296 -- # x722=() 00:17:06.943 22:16:01 -- nvmf/common.sh@296 -- # local -ga x722 00:17:06.943 22:16:01 -- nvmf/common.sh@297 -- # mlx=() 00:17:06.943 22:16:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:06.943 22:16:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.943 22:16:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.943 22:16:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.943 22:16:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.943 22:16:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.943 22:16:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.943 22:16:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.943 22:16:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.943 22:16:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.943 22:16:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.943 22:16:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.943 22:16:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:06.943 22:16:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:06.943 22:16:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:06.943 22:16:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:06.943 22:16:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:06.943 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:06.943 22:16:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:06.943 22:16:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:06.943 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:06.943 22:16:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:06.943 22:16:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:06.943 22:16:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.943 22:16:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:06.943 22:16:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.943 22:16:01 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:06.943 Found net devices under 0000:86:00.0: cvl_0_0 00:17:06.943 22:16:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.943 22:16:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:06.943 22:16:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.943 22:16:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:06.943 22:16:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.943 22:16:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:06.943 Found net devices under 0000:86:00.1: cvl_0_1 00:17:06.943 22:16:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.943 22:16:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:06.943 22:16:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:06.943 22:16:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:06.943 22:16:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:06.943 22:16:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.943 22:16:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.943 22:16:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.943 22:16:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:06.943 22:16:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.944 22:16:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.944 22:16:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:06.944 22:16:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.944 22:16:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.944 22:16:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:06.944 22:16:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:06.944 22:16:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.944 22:16:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.944 22:16:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.944 22:16:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.944 22:16:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:06.944 22:16:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.944 22:16:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.944 22:16:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.944 22:16:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:06.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:17:06.944 00:17:06.944 --- 10.0.0.2 ping statistics --- 00:17:06.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.944 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:17:06.944 22:16:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:06.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.385 ms 00:17:06.944 00:17:06.944 --- 10.0.0.1 ping statistics --- 00:17:06.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.944 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:17:06.944 22:16:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.944 22:16:01 -- nvmf/common.sh@410 -- # return 0 00:17:06.944 22:16:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:06.944 22:16:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.944 22:16:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:06.944 22:16:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:06.944 22:16:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.944 22:16:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:06.944 22:16:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:06.944 22:16:01 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:06.944 22:16:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:06.944 22:16:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:06.944 22:16:01 -- common/autotest_common.sh@10 -- # set +x 00:17:06.944 22:16:02 -- nvmf/common.sh@469 -- # nvmfpid=3539059 00:17:06.944 22:16:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:06.944 22:16:02 -- nvmf/common.sh@470 -- # waitforlisten 3539059 00:17:06.944 22:16:02 -- common/autotest_common.sh@819 -- # '[' -z 3539059 ']' 00:17:06.944 22:16:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.944 22:16:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:06.944 22:16:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.944 22:16:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:06.944 22:16:02 -- common/autotest_common.sh@10 -- # set +x 00:17:06.944 [2024-07-24 22:16:02.049883] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:06.944 [2024-07-24 22:16:02.049928] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.944 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.204 [2024-07-24 22:16:02.106722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.204 [2024-07-24 22:16:02.145464] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:07.204 [2024-07-24 22:16:02.145572] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.204 [2024-07-24 22:16:02.145580] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:07.204 [2024-07-24 22:16:02.145587] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:07.204 [2024-07-24 22:16:02.145609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.773 22:16:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:07.773 22:16:02 -- common/autotest_common.sh@852 -- # return 0 00:17:07.773 22:16:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:07.773 22:16:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:07.773 22:16:02 -- common/autotest_common.sh@10 -- # set +x 00:17:07.773 22:16:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.773 22:16:02 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:08.033 [2024-07-24 22:16:03.031658] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.033 22:16:03 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:08.033 22:16:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:08.033 22:16:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:08.033 22:16:03 -- common/autotest_common.sh@10 -- # set +x 00:17:08.033 ************************************ 00:17:08.033 START TEST lvs_grow_clean 00:17:08.033 ************************************ 00:17:08.033 22:16:03 -- common/autotest_common.sh@1104 -- # lvs_grow 00:17:08.033 22:16:03 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:08.033 22:16:03 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:08.033 22:16:03 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:08.033 22:16:03 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:08.033 22:16:03 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:08.033 22:16:03 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:08.033 22:16:03 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.033 22:16:03 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.033 22:16:03 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:08.292 22:16:03 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:08.292 22:16:03 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:08.292 22:16:03 -- target/nvmf_lvs_grow.sh@28 -- # lvs=419d2802-8d51-4183-ba6a-1e162d64d853 00:17:08.292 22:16:03 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 419d2802-8d51-4183-ba6a-1e162d64d853 00:17:08.292 22:16:03 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:08.550 22:16:03 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:08.550 22:16:03 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:08.550 22:16:03 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 419d2802-8d51-4183-ba6a-1e162d64d853 lvol 150 00:17:08.809 22:16:03 -- target/nvmf_lvs_grow.sh@33 -- # lvol=d1c0879a-e39d-4397-9910-bc30d2efdb74 00:17:08.809 22:16:03 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:08.809 22:16:03 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:08.809 [2024-07-24 22:16:03.902509] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:08.809 [2024-07-24 22:16:03.902560] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:08.809 true 00:17:08.809 22:16:03 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:08.809 22:16:03 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 419d2802-8d51-4183-ba6a-1e162d64d853 00:17:09.090 22:16:04 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:09.090 22:16:04 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:09.388 22:16:04 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d1c0879a-e39d-4397-9910-bc30d2efdb74 00:17:09.388 22:16:04 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:09.647 [2024-07-24 22:16:04.560505] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.647 22:16:04 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:09.647 22:16:04 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3539567 00:17:09.647 22:16:04 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:09.647 22:16:04 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:09.647 22:16:04 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3539567 /var/tmp/bdevperf.sock 00:17:09.647 22:16:04 -- common/autotest_common.sh@819 -- # '[' -z 3539567 ']' 00:17:09.647 22:16:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.647 22:16:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:09.647 22:16:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:09.647 22:16:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:09.647 22:16:04 -- common/autotest_common.sh@10 -- # set +x 00:17:09.647 [2024-07-24 22:16:04.766161] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:17:09.647 [2024-07-24 22:16:04.766206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539567 ] 00:17:09.906 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.906 [2024-07-24 22:16:04.820302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.906 [2024-07-24 22:16:04.859998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.473 22:16:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:10.473 22:16:05 -- common/autotest_common.sh@852 -- # return 0 00:17:10.473 22:16:05 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:11.041 Nvme0n1 00:17:11.041 22:16:05 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:11.041 [ 00:17:11.041 { 00:17:11.041 "name": "Nvme0n1", 00:17:11.041 "aliases": [ 00:17:11.041 "d1c0879a-e39d-4397-9910-bc30d2efdb74" 00:17:11.041 ], 00:17:11.041 "product_name": "NVMe disk", 00:17:11.041 "block_size": 4096, 00:17:11.041 "num_blocks": 38912, 00:17:11.041 "uuid": "d1c0879a-e39d-4397-9910-bc30d2efdb74", 00:17:11.041 "assigned_rate_limits": { 00:17:11.041 "rw_ios_per_sec": 0, 00:17:11.041 "rw_mbytes_per_sec": 0, 00:17:11.041 "r_mbytes_per_sec": 0, 00:17:11.041 "w_mbytes_per_sec": 0 00:17:11.041 }, 00:17:11.041 "claimed": false, 00:17:11.041 "zoned": false, 00:17:11.041 "supported_io_types": { 00:17:11.041 "read": true, 00:17:11.041 "write": true, 00:17:11.041 "unmap": true, 00:17:11.041 "write_zeroes": true, 00:17:11.041 "flush": true, 00:17:11.041 "reset": true, 00:17:11.041 "compare": true, 00:17:11.041 "compare_and_write": true, 00:17:11.041 "abort": true, 00:17:11.041 "nvme_admin": true, 00:17:11.041 "nvme_io": true 00:17:11.041 }, 00:17:11.041 "driver_specific": { 00:17:11.041 "nvme": [ 00:17:11.041 { 00:17:11.041 "trid": { 00:17:11.041 "trtype": "TCP", 00:17:11.041 "adrfam": "IPv4", 00:17:11.041 "traddr": "10.0.0.2", 00:17:11.041 "trsvcid": "4420", 00:17:11.041 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:11.041 }, 00:17:11.041 "ctrlr_data": { 00:17:11.041 "cntlid": 1, 00:17:11.041 "vendor_id": "0x8086", 00:17:11.041 "model_number": "SPDK bdev Controller", 00:17:11.041 "serial_number": "SPDK0", 00:17:11.041 "firmware_revision": "24.01.1", 00:17:11.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:11.041 "oacs": { 00:17:11.041 "security": 0, 00:17:11.041 "format": 0, 00:17:11.041 "firmware": 0, 00:17:11.041 "ns_manage": 0 00:17:11.041 }, 00:17:11.041 "multi_ctrlr": true, 00:17:11.041 "ana_reporting": false 00:17:11.041 }, 00:17:11.041 "vs": { 00:17:11.041 "nvme_version": "1.3" 00:17:11.041 }, 00:17:11.041 "ns_data": { 00:17:11.041 "id": 1, 00:17:11.041 "can_share": true 00:17:11.041 } 00:17:11.041 } 00:17:11.041 ], 00:17:11.041 "mp_policy": "active_passive" 00:17:11.041 } 00:17:11.041 } 00:17:11.041 ] 00:17:11.041 22:16:06 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3539808 00:17:11.041 22:16:06 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:11.041 22:16:06 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:11.300 Running I/O 
for 10 seconds... 00:17:12.235 Latency(us) 00:17:12.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.235 Nvme0n1 : 1.00 21859.00 85.39 0.00 0.00 0.00 0.00 0.00 00:17:12.235 =================================================================================================================== 00:17:12.235 Total : 21859.00 85.39 0.00 0.00 0.00 0.00 0.00 00:17:12.235 00:17:13.171 22:16:08 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 419d2802-8d51-4183-ba6a-1e162d64d853 00:17:13.171 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.171 Nvme0n1 : 2.00 22181.00 86.64 0.00 0.00 0.00 0.00 0.00 00:17:13.171 =================================================================================================================== 00:17:13.171 Total : 22181.00 86.64 0.00 0.00 0.00 0.00 0.00 00:17:13.171 00:17:13.430 true 00:17:13.430 22:16:08 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 419d2802-8d51-4183-ba6a-1e162d64d853 00:17:13.430 22:16:08 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:13.430 22:16:08 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:13.430 22:16:08 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:13.430 22:16:08 -- target/nvmf_lvs_grow.sh@65 -- # wait 3539808 00:17:14.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.366 Nvme0n1 : 3.00 22463.00 87.75 0.00 0.00 0.00 0.00 0.00 00:17:14.366 =================================================================================================================== 00:17:14.366 Total : 22463.00 87.75 0.00 0.00 0.00 0.00 0.00 00:17:14.366 00:17:15.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.302 Nvme0n1 : 4.00 22494.25 87.87 0.00 0.00 0.00 0.00 0.00 00:17:15.302 =================================================================================================================== 00:17:15.302 Total : 22494.25 87.87 0.00 0.00 0.00 0.00 0.00 00:17:15.302 00:17:16.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.238 Nvme0n1 : 5.00 22557.00 88.11 0.00 0.00 0.00 0.00 0.00 00:17:16.238 =================================================================================================================== 00:17:16.238 Total : 22557.00 88.11 0.00 0.00 0.00 0.00 0.00 00:17:16.238 00:17:17.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.174 Nvme0n1 : 6.00 22582.17 88.21 0.00 0.00 0.00 0.00 0.00 00:17:17.174 =================================================================================================================== 00:17:17.174 Total : 22582.17 88.21 0.00 0.00 0.00 0.00 0.00 00:17:17.174 00:17:18.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.109 Nvme0n1 : 7.00 22663.86 88.53 0.00 0.00 0.00 0.00 0.00 00:17:18.109 =================================================================================================================== 00:17:18.109 Total : 22663.86 88.53 0.00 0.00 0.00 0.00 0.00 00:17:18.109 00:17:19.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.495 Nvme0n1 : 8.00 22664.00 88.53 0.00 0.00 0.00 0.00 0.00 00:17:19.495 
=================================================================================================================== 00:17:19.495 Total : 22664.00 88.53 0.00 0.00 0.00 0.00 0.00 00:17:19.495 00:17:20.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.430 Nvme0n1 : 9.00 22728.33 88.78 0.00 0.00 0.00 0.00 0.00 00:17:20.430 =================================================================================================================== 00:17:20.430 Total : 22728.33 88.78 0.00 0.00 0.00 0.00 0.00 00:17:20.430 00:17:21.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.365 Nvme0n1 : 10.00 22756.30 88.89 0.00 0.00 0.00 0.00 0.00 00:17:21.365 =================================================================================================================== 00:17:21.365 Total : 22756.30 88.89 0.00 0.00 0.00 0.00 0.00 00:17:21.365 00:17:21.365 00:17:21.365 Latency(us) 00:17:21.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.365 Nvme0n1 : 10.01 22755.11 88.89 0.00 0.00 5621.23 2920.63 24960.67 00:17:21.365 =================================================================================================================== 00:17:21.365 Total : 22755.11 88.89 0.00 0.00 5621.23 2920.63 24960.67 00:17:21.365 0 00:17:21.365 22:16:16 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3539567 00:17:21.365 22:16:16 -- common/autotest_common.sh@926 -- # '[' -z 3539567 ']' 00:17:21.365 22:16:16 -- common/autotest_common.sh@930 -- # kill -0 3539567 00:17:21.365 22:16:16 -- common/autotest_common.sh@931 -- # uname 00:17:21.365 22:16:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.365 22:16:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3539567 00:17:21.365 22:16:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:21.365 22:16:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:21.365 22:16:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3539567' 00:17:21.365 killing process with pid 3539567 00:17:21.365 22:16:16 -- common/autotest_common.sh@945 -- # kill 3539567 00:17:21.365 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.365 00:17:21.365 Latency(us) 00:17:21.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.365 =================================================================================================================== 00:17:21.365 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:21.365 22:16:16 -- common/autotest_common.sh@950 -- # wait 3539567 00:17:21.365 22:16:16 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:21.624 22:16:16 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 419d2802-8d51-4183-ba6a-1e162d64d853 00:17:21.624 22:16:16 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:21.881 22:16:16 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:21.881 22:16:16 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:21.881 22:16:16 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:22.139 [2024-07-24 22:16:17.017290] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:22.139 22:16:17 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 419d2802-8d51-4183-ba6a-1e162d64d853 00:17:22.139 22:16:17 -- common/autotest_common.sh@640 -- # local es=0 00:17:22.139 22:16:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 419d2802-8d51-4183-ba6a-1e162d64d853 00:17:22.139 22:16:17 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.139 22:16:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:22.139 22:16:17 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.139 22:16:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:22.140 22:16:17 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.140 22:16:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:22.140 22:16:17 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:22.140 22:16:17 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:22.140 22:16:17 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 419d2802-8d51-4183-ba6a-1e162d64d853 00:17:22.140 request: 00:17:22.140 { 00:17:22.140 "uuid": "419d2802-8d51-4183-ba6a-1e162d64d853", 00:17:22.140 "method": "bdev_lvol_get_lvstores", 00:17:22.140 "req_id": 1 00:17:22.140 } 00:17:22.140 Got JSON-RPC error response 00:17:22.140 response: 00:17:22.140 { 00:17:22.140 "code": -19, 00:17:22.140 "message": "No such device" 00:17:22.140 } 00:17:22.140 22:16:17 -- common/autotest_common.sh@643 -- # es=1 00:17:22.140 22:16:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:22.140 22:16:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:22.140 22:16:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:22.140 22:16:17 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:22.398 aio_bdev 00:17:22.398 22:16:17 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev d1c0879a-e39d-4397-9910-bc30d2efdb74 00:17:22.398 22:16:17 -- common/autotest_common.sh@887 -- # local bdev_name=d1c0879a-e39d-4397-9910-bc30d2efdb74 00:17:22.398 22:16:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:22.398 22:16:17 -- common/autotest_common.sh@889 -- # local i 00:17:22.398 22:16:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:22.398 22:16:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:22.398 22:16:17 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:22.656 22:16:17 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d1c0879a-e39d-4397-9910-bc30d2efdb74 -t 2000 00:17:22.656 [ 00:17:22.656 { 00:17:22.656 "name": "d1c0879a-e39d-4397-9910-bc30d2efdb74", 00:17:22.656 "aliases": [ 00:17:22.656 "lvs/lvol" 
00:17:22.656 ], 00:17:22.656 "product_name": "Logical Volume", 00:17:22.656 "block_size": 4096, 00:17:22.657 "num_blocks": 38912, 00:17:22.657 "uuid": "d1c0879a-e39d-4397-9910-bc30d2efdb74", 00:17:22.657 "assigned_rate_limits": { 00:17:22.657 "rw_ios_per_sec": 0, 00:17:22.657 "rw_mbytes_per_sec": 0, 00:17:22.657 "r_mbytes_per_sec": 0, 00:17:22.657 "w_mbytes_per_sec": 0 00:17:22.657 }, 00:17:22.657 "claimed": false, 00:17:22.657 "zoned": false, 00:17:22.657 "supported_io_types": { 00:17:22.657 "read": true, 00:17:22.657 "write": true, 00:17:22.657 "unmap": true, 00:17:22.657 "write_zeroes": true, 00:17:22.657 "flush": false, 00:17:22.657 "reset": true, 00:17:22.657 "compare": false, 00:17:22.657 "compare_and_write": false, 00:17:22.657 "abort": false, 00:17:22.657 "nvme_admin": false, 00:17:22.657 "nvme_io": false 00:17:22.657 }, 00:17:22.657 "driver_specific": { 00:17:22.657 "lvol": { 00:17:22.657 "lvol_store_uuid": "419d2802-8d51-4183-ba6a-1e162d64d853", 00:17:22.657 "base_bdev": "aio_bdev", 00:17:22.657 "thin_provision": false, 00:17:22.657 "snapshot": false, 00:17:22.657 "clone": false, 00:17:22.657 "esnap_clone": false 00:17:22.657 } 00:17:22.657 } 00:17:22.657 } 00:17:22.657 ] 00:17:22.657 22:16:17 -- common/autotest_common.sh@895 -- # return 0 00:17:22.657 22:16:17 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 419d2802-8d51-4183-ba6a-1e162d64d853 00:17:22.657 22:16:17 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:22.915 22:16:17 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:22.915 22:16:17 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 419d2802-8d51-4183-ba6a-1e162d64d853 00:17:22.915 22:16:17 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:23.174 22:16:18 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:23.174 22:16:18 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d1c0879a-e39d-4397-9910-bc30d2efdb74 00:17:23.174 22:16:18 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 419d2802-8d51-4183-ba6a-1e162d64d853 00:17:23.432 22:16:18 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:23.691 22:16:18 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:23.691 00:17:23.691 real 0m15.578s 00:17:23.691 user 0m15.236s 00:17:23.691 sys 0m1.422s 00:17:23.691 22:16:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:23.691 22:16:18 -- common/autotest_common.sh@10 -- # set +x 00:17:23.691 ************************************ 00:17:23.691 END TEST lvs_grow_clean 00:17:23.691 ************************************ 00:17:23.691 22:16:18 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:23.691 22:16:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:23.691 22:16:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:23.691 22:16:18 -- common/autotest_common.sh@10 -- # set +x 00:17:23.691 ************************************ 00:17:23.691 START TEST lvs_grow_dirty 00:17:23.691 ************************************ 00:17:23.691 22:16:18 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:17:23.691 
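(A condensed sketch of the lvstore-grow RPC sequence this dirty-grow pass drives, using a generic backing-file path and a <lvs-uuid> placeholder; the actual run uses the full workspace paths and UUIDs shown in the surrounding log, and rpc.py stands in for scripts/rpc.py.)

truncate -s 200M /tmp/aio_bdev_file                       # backing file for the AIO bdev
rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096   # expose it as a 4096-byte-block bdev
rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs
rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150            # 150 MiB logical volume on the store
truncate -s 400M /tmp/aio_bdev_file                       # grow the backing file
rpc.py bdev_aio_rescan aio_bdev                           # let the AIO bdev pick up the new size
rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>               # claim the newly available clusters
rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'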
22:16:18 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:23.691 22:16:18 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:23.691 22:16:18 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:23.691 22:16:18 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:23.691 22:16:18 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:23.691 22:16:18 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:23.691 22:16:18 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:23.691 22:16:18 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:23.691 22:16:18 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:23.950 22:16:18 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:23.950 22:16:18 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:23.950 22:16:19 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a08d3c90-8624-477e-94dd-d124f9904b93 00:17:23.950 22:16:19 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a08d3c90-8624-477e-94dd-d124f9904b93 00:17:23.950 22:16:19 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:24.209 22:16:19 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:24.209 22:16:19 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:24.209 22:16:19 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a08d3c90-8624-477e-94dd-d124f9904b93 lvol 150 00:17:24.468 22:16:19 -- target/nvmf_lvs_grow.sh@33 -- # lvol=21827844-e281-417f-87a8-4899314389f1 00:17:24.468 22:16:19 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:24.468 22:16:19 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:24.468 [2024-07-24 22:16:19.540380] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:24.468 [2024-07-24 22:16:19.540434] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:24.468 true 00:17:24.468 22:16:19 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a08d3c90-8624-477e-94dd-d124f9904b93 00:17:24.468 22:16:19 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:24.727 22:16:19 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:24.727 22:16:19 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:24.985 22:16:19 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
21827844-e281-417f-87a8-4899314389f1 00:17:24.985 22:16:20 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:25.243 22:16:20 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:25.502 22:16:20 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:25.502 22:16:20 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3542198 00:17:25.502 22:16:20 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:25.502 22:16:20 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3542198 /var/tmp/bdevperf.sock 00:17:25.502 22:16:20 -- common/autotest_common.sh@819 -- # '[' -z 3542198 ']' 00:17:25.502 22:16:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:25.502 22:16:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:25.502 22:16:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:25.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:25.502 22:16:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:25.502 22:16:20 -- common/autotest_common.sh@10 -- # set +x 00:17:25.502 [2024-07-24 22:16:20.415350] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:25.502 [2024-07-24 22:16:20.415397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3542198 ] 00:17:25.502 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.502 [2024-07-24 22:16:20.469233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.502 [2024-07-24 22:16:20.506451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.437 22:16:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:26.437 22:16:21 -- common/autotest_common.sh@852 -- # return 0 00:17:26.437 22:16:21 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:26.696 Nvme0n1 00:17:26.696 22:16:21 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:26.696 [ 00:17:26.696 { 00:17:26.696 "name": "Nvme0n1", 00:17:26.696 "aliases": [ 00:17:26.696 "21827844-e281-417f-87a8-4899314389f1" 00:17:26.696 ], 00:17:26.696 "product_name": "NVMe disk", 00:17:26.696 "block_size": 4096, 00:17:26.696 "num_blocks": 38912, 00:17:26.696 "uuid": "21827844-e281-417f-87a8-4899314389f1", 00:17:26.696 "assigned_rate_limits": { 00:17:26.696 "rw_ios_per_sec": 0, 00:17:26.696 "rw_mbytes_per_sec": 0, 00:17:26.696 "r_mbytes_per_sec": 0, 00:17:26.696 "w_mbytes_per_sec": 0 00:17:26.696 }, 00:17:26.696 "claimed": false, 00:17:26.696 "zoned": false, 00:17:26.696 "supported_io_types": { 00:17:26.696 "read": true, 00:17:26.696 "write": true, 
00:17:26.696 "unmap": true, 00:17:26.696 "write_zeroes": true, 00:17:26.696 "flush": true, 00:17:26.696 "reset": true, 00:17:26.696 "compare": true, 00:17:26.696 "compare_and_write": true, 00:17:26.696 "abort": true, 00:17:26.696 "nvme_admin": true, 00:17:26.696 "nvme_io": true 00:17:26.696 }, 00:17:26.696 "driver_specific": { 00:17:26.696 "nvme": [ 00:17:26.696 { 00:17:26.696 "trid": { 00:17:26.696 "trtype": "TCP", 00:17:26.696 "adrfam": "IPv4", 00:17:26.696 "traddr": "10.0.0.2", 00:17:26.696 "trsvcid": "4420", 00:17:26.696 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:26.696 }, 00:17:26.696 "ctrlr_data": { 00:17:26.696 "cntlid": 1, 00:17:26.696 "vendor_id": "0x8086", 00:17:26.696 "model_number": "SPDK bdev Controller", 00:17:26.696 "serial_number": "SPDK0", 00:17:26.696 "firmware_revision": "24.01.1", 00:17:26.696 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:26.696 "oacs": { 00:17:26.696 "security": 0, 00:17:26.696 "format": 0, 00:17:26.696 "firmware": 0, 00:17:26.696 "ns_manage": 0 00:17:26.696 }, 00:17:26.696 "multi_ctrlr": true, 00:17:26.696 "ana_reporting": false 00:17:26.696 }, 00:17:26.696 "vs": { 00:17:26.696 "nvme_version": "1.3" 00:17:26.696 }, 00:17:26.696 "ns_data": { 00:17:26.696 "id": 1, 00:17:26.696 "can_share": true 00:17:26.696 } 00:17:26.696 } 00:17:26.696 ], 00:17:26.696 "mp_policy": "active_passive" 00:17:26.696 } 00:17:26.696 } 00:17:26.696 ] 00:17:26.696 22:16:21 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3542436 00:17:26.696 22:16:21 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:26.696 22:16:21 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:26.956 Running I/O for 10 seconds... 00:17:27.954 Latency(us) 00:17:27.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.954 Nvme0n1 : 1.00 22145.00 86.50 0.00 0.00 0.00 0.00 0.00 00:17:27.954 =================================================================================================================== 00:17:27.954 Total : 22145.00 86.50 0.00 0.00 0.00 0.00 0.00 00:17:27.954 00:17:28.887 22:16:23 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a08d3c90-8624-477e-94dd-d124f9904b93 00:17:28.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.888 Nvme0n1 : 2.00 22392.00 87.47 0.00 0.00 0.00 0.00 0.00 00:17:28.888 =================================================================================================================== 00:17:28.888 Total : 22392.00 87.47 0.00 0.00 0.00 0.00 0.00 00:17:28.888 00:17:28.888 true 00:17:28.888 22:16:23 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a08d3c90-8624-477e-94dd-d124f9904b93 00:17:28.888 22:16:23 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:29.146 22:16:24 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:29.146 22:16:24 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:29.146 22:16:24 -- target/nvmf_lvs_grow.sh@65 -- # wait 3542436 00:17:30.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.078 Nvme0n1 : 3.00 22486.00 87.84 0.00 0.00 0.00 0.00 0.00 00:17:30.079 
=================================================================================================================== 00:17:30.079 Total : 22486.00 87.84 0.00 0.00 0.00 0.00 0.00 00:17:30.079 00:17:31.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.013 Nvme0n1 : 4.00 22572.25 88.17 0.00 0.00 0.00 0.00 0.00 00:17:31.013 =================================================================================================================== 00:17:31.013 Total : 22572.25 88.17 0.00 0.00 0.00 0.00 0.00 00:17:31.013 00:17:31.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.947 Nvme0n1 : 5.00 22626.60 88.39 0.00 0.00 0.00 0.00 0.00 00:17:31.947 =================================================================================================================== 00:17:31.947 Total : 22626.60 88.39 0.00 0.00 0.00 0.00 0.00 00:17:31.947 00:17:32.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.882 Nvme0n1 : 6.00 22696.50 88.66 0.00 0.00 0.00 0.00 0.00 00:17:32.882 =================================================================================================================== 00:17:32.882 Total : 22696.50 88.66 0.00 0.00 0.00 0.00 0.00 00:17:32.882 00:17:33.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.818 Nvme0n1 : 7.00 22828.00 89.17 0.00 0.00 0.00 0.00 0.00 00:17:33.818 =================================================================================================================== 00:17:33.818 Total : 22828.00 89.17 0.00 0.00 0.00 0.00 0.00 00:17:33.818 00:17:34.754 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.754 Nvme0n1 : 8.00 22859.25 89.29 0.00 0.00 0.00 0.00 0.00 00:17:34.754 =================================================================================================================== 00:17:34.754 Total : 22859.25 89.29 0.00 0.00 0.00 0.00 0.00 00:17:34.754 00:17:36.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.131 Nvme0n1 : 9.00 22934.44 89.59 0.00 0.00 0.00 0.00 0.00 00:17:36.131 =================================================================================================================== 00:17:36.131 Total : 22934.44 89.59 0.00 0.00 0.00 0.00 0.00 00:17:36.131 00:17:37.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.067 Nvme0n1 : 10.00 22950.50 89.65 0.00 0.00 0.00 0.00 0.00 00:17:37.067 =================================================================================================================== 00:17:37.067 Total : 22950.50 89.65 0.00 0.00 0.00 0.00 0.00 00:17:37.067 00:17:37.067 00:17:37.067 Latency(us) 00:17:37.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.067 Nvme0n1 : 10.01 22948.55 89.64 0.00 0.00 5573.85 3134.33 24732.72 00:17:37.067 =================================================================================================================== 00:17:37.067 Total : 22948.55 89.64 0.00 0.00 5573.85 3134.33 24732.72 00:17:37.067 0 00:17:37.067 22:16:31 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3542198 00:17:37.067 22:16:31 -- common/autotest_common.sh@926 -- # '[' -z 3542198 ']' 00:17:37.067 22:16:31 -- common/autotest_common.sh@930 -- # kill -0 3542198 00:17:37.067 22:16:31 -- common/autotest_common.sh@931 -- # uname 00:17:37.067 22:16:31 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:37.067 22:16:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3542198 00:17:37.067 22:16:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:37.067 22:16:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:37.067 22:16:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3542198' 00:17:37.067 killing process with pid 3542198 00:17:37.067 22:16:31 -- common/autotest_common.sh@945 -- # kill 3542198 00:17:37.067 Received shutdown signal, test time was about 10.000000 seconds 00:17:37.067 00:17:37.067 Latency(us) 00:17:37.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.067 =================================================================================================================== 00:17:37.067 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:37.067 22:16:31 -- common/autotest_common.sh@950 -- # wait 3542198 00:17:37.068 22:16:32 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:37.327 22:16:32 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a08d3c90-8624-477e-94dd-d124f9904b93 00:17:37.327 22:16:32 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:37.586 22:16:32 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:37.586 22:16:32 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:37.586 22:16:32 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3539059 00:17:37.586 22:16:32 -- target/nvmf_lvs_grow.sh@74 -- # wait 3539059 00:17:37.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3539059 Killed "${NVMF_APP[@]}" "$@" 00:17:37.586 22:16:32 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:37.586 22:16:32 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:37.586 22:16:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:37.586 22:16:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:37.586 22:16:32 -- common/autotest_common.sh@10 -- # set +x 00:17:37.586 22:16:32 -- nvmf/common.sh@469 -- # nvmfpid=3544306 00:17:37.586 22:16:32 -- nvmf/common.sh@470 -- # waitforlisten 3544306 00:17:37.586 22:16:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:37.586 22:16:32 -- common/autotest_common.sh@819 -- # '[' -z 3544306 ']' 00:17:37.586 22:16:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.586 22:16:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:37.586 22:16:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.586 22:16:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:37.586 22:16:32 -- common/autotest_common.sh@10 -- # set +x 00:17:37.586 [2024-07-24 22:16:32.560992] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:17:37.586 [2024-07-24 22:16:32.561036] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.586 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.586 [2024-07-24 22:16:32.617690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.586 [2024-07-24 22:16:32.656019] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:37.586 [2024-07-24 22:16:32.656135] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.586 [2024-07-24 22:16:32.656144] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.586 [2024-07-24 22:16:32.656150] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.586 [2024-07-24 22:16:32.656167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.522 22:16:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:38.522 22:16:33 -- common/autotest_common.sh@852 -- # return 0 00:17:38.522 22:16:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:38.522 22:16:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:38.522 22:16:33 -- common/autotest_common.sh@10 -- # set +x 00:17:38.522 22:16:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.522 22:16:33 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:38.522 [2024-07-24 22:16:33.541185] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:38.522 [2024-07-24 22:16:33.541268] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:38.522 [2024-07-24 22:16:33.541293] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:38.522 22:16:33 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:38.522 22:16:33 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 21827844-e281-417f-87a8-4899314389f1 00:17:38.522 22:16:33 -- common/autotest_common.sh@887 -- # local bdev_name=21827844-e281-417f-87a8-4899314389f1 00:17:38.522 22:16:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:38.522 22:16:33 -- common/autotest_common.sh@889 -- # local i 00:17:38.522 22:16:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:38.522 22:16:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:38.522 22:16:33 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:38.780 22:16:33 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 21827844-e281-417f-87a8-4899314389f1 -t 2000 00:17:38.780 [ 00:17:38.780 { 00:17:38.780 "name": "21827844-e281-417f-87a8-4899314389f1", 00:17:38.780 "aliases": [ 00:17:38.780 "lvs/lvol" 00:17:38.780 ], 00:17:38.780 "product_name": "Logical Volume", 00:17:38.780 "block_size": 4096, 00:17:38.780 "num_blocks": 38912, 00:17:38.780 "uuid": "21827844-e281-417f-87a8-4899314389f1", 00:17:38.780 "assigned_rate_limits": { 00:17:38.780 "rw_ios_per_sec": 0, 00:17:38.780 "rw_mbytes_per_sec": 0, 00:17:38.780 "r_mbytes_per_sec": 0, 00:17:38.780 
"w_mbytes_per_sec": 0 00:17:38.780 }, 00:17:38.780 "claimed": false, 00:17:38.780 "zoned": false, 00:17:38.780 "supported_io_types": { 00:17:38.780 "read": true, 00:17:38.780 "write": true, 00:17:38.780 "unmap": true, 00:17:38.780 "write_zeroes": true, 00:17:38.780 "flush": false, 00:17:38.780 "reset": true, 00:17:38.780 "compare": false, 00:17:38.780 "compare_and_write": false, 00:17:38.780 "abort": false, 00:17:38.780 "nvme_admin": false, 00:17:38.780 "nvme_io": false 00:17:38.780 }, 00:17:38.780 "driver_specific": { 00:17:38.780 "lvol": { 00:17:38.780 "lvol_store_uuid": "a08d3c90-8624-477e-94dd-d124f9904b93", 00:17:38.780 "base_bdev": "aio_bdev", 00:17:38.780 "thin_provision": false, 00:17:38.780 "snapshot": false, 00:17:38.780 "clone": false, 00:17:38.780 "esnap_clone": false 00:17:38.780 } 00:17:38.780 } 00:17:38.780 } 00:17:38.780 ] 00:17:38.780 22:16:33 -- common/autotest_common.sh@895 -- # return 0 00:17:38.780 22:16:33 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a08d3c90-8624-477e-94dd-d124f9904b93 00:17:38.780 22:16:33 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:39.038 22:16:34 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:39.038 22:16:34 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a08d3c90-8624-477e-94dd-d124f9904b93 00:17:39.038 22:16:34 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:39.297 22:16:34 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:39.297 22:16:34 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:39.297 [2024-07-24 22:16:34.369881] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:39.297 22:16:34 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a08d3c90-8624-477e-94dd-d124f9904b93 00:17:39.297 22:16:34 -- common/autotest_common.sh@640 -- # local es=0 00:17:39.297 22:16:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a08d3c90-8624-477e-94dd-d124f9904b93 00:17:39.297 22:16:34 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.297 22:16:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:39.297 22:16:34 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.297 22:16:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:39.297 22:16:34 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.297 22:16:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:39.297 22:16:34 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.297 22:16:34 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:39.297 22:16:34 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a08d3c90-8624-477e-94dd-d124f9904b93 00:17:39.555 request: 00:17:39.555 { 00:17:39.555 
"uuid": "a08d3c90-8624-477e-94dd-d124f9904b93", 00:17:39.555 "method": "bdev_lvol_get_lvstores", 00:17:39.555 "req_id": 1 00:17:39.555 } 00:17:39.555 Got JSON-RPC error response 00:17:39.555 response: 00:17:39.555 { 00:17:39.555 "code": -19, 00:17:39.555 "message": "No such device" 00:17:39.555 } 00:17:39.555 22:16:34 -- common/autotest_common.sh@643 -- # es=1 00:17:39.555 22:16:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:39.555 22:16:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:39.555 22:16:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:39.555 22:16:34 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:39.813 aio_bdev 00:17:39.813 22:16:34 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 21827844-e281-417f-87a8-4899314389f1 00:17:39.813 22:16:34 -- common/autotest_common.sh@887 -- # local bdev_name=21827844-e281-417f-87a8-4899314389f1 00:17:39.813 22:16:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:39.813 22:16:34 -- common/autotest_common.sh@889 -- # local i 00:17:39.813 22:16:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:39.813 22:16:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:39.813 22:16:34 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:39.813 22:16:34 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 21827844-e281-417f-87a8-4899314389f1 -t 2000 00:17:40.072 [ 00:17:40.072 { 00:17:40.072 "name": "21827844-e281-417f-87a8-4899314389f1", 00:17:40.072 "aliases": [ 00:17:40.072 "lvs/lvol" 00:17:40.072 ], 00:17:40.072 "product_name": "Logical Volume", 00:17:40.072 "block_size": 4096, 00:17:40.072 "num_blocks": 38912, 00:17:40.072 "uuid": "21827844-e281-417f-87a8-4899314389f1", 00:17:40.072 "assigned_rate_limits": { 00:17:40.072 "rw_ios_per_sec": 0, 00:17:40.072 "rw_mbytes_per_sec": 0, 00:17:40.072 "r_mbytes_per_sec": 0, 00:17:40.072 "w_mbytes_per_sec": 0 00:17:40.072 }, 00:17:40.072 "claimed": false, 00:17:40.072 "zoned": false, 00:17:40.072 "supported_io_types": { 00:17:40.072 "read": true, 00:17:40.072 "write": true, 00:17:40.072 "unmap": true, 00:17:40.072 "write_zeroes": true, 00:17:40.072 "flush": false, 00:17:40.072 "reset": true, 00:17:40.072 "compare": false, 00:17:40.072 "compare_and_write": false, 00:17:40.072 "abort": false, 00:17:40.072 "nvme_admin": false, 00:17:40.072 "nvme_io": false 00:17:40.072 }, 00:17:40.072 "driver_specific": { 00:17:40.072 "lvol": { 00:17:40.072 "lvol_store_uuid": "a08d3c90-8624-477e-94dd-d124f9904b93", 00:17:40.072 "base_bdev": "aio_bdev", 00:17:40.072 "thin_provision": false, 00:17:40.072 "snapshot": false, 00:17:40.072 "clone": false, 00:17:40.072 "esnap_clone": false 00:17:40.072 } 00:17:40.072 } 00:17:40.072 } 00:17:40.072 ] 00:17:40.072 22:16:35 -- common/autotest_common.sh@895 -- # return 0 00:17:40.072 22:16:35 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a08d3c90-8624-477e-94dd-d124f9904b93 00:17:40.072 22:16:35 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:40.331 22:16:35 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:40.331 22:16:35 -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a08d3c90-8624-477e-94dd-d124f9904b93 00:17:40.331 22:16:35 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:40.331 22:16:35 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:40.331 22:16:35 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 21827844-e281-417f-87a8-4899314389f1 00:17:40.590 22:16:35 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a08d3c90-8624-477e-94dd-d124f9904b93 00:17:40.849 22:16:35 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:40.849 22:16:35 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:40.849 00:17:40.849 real 0m17.264s 00:17:40.849 user 0m44.007s 00:17:40.849 sys 0m4.030s 00:17:40.849 22:16:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.849 22:16:35 -- common/autotest_common.sh@10 -- # set +x 00:17:40.849 ************************************ 00:17:40.849 END TEST lvs_grow_dirty 00:17:40.849 ************************************ 00:17:40.849 22:16:35 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:40.849 22:16:35 -- common/autotest_common.sh@796 -- # type=--id 00:17:40.849 22:16:35 -- common/autotest_common.sh@797 -- # id=0 00:17:40.849 22:16:35 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:40.849 22:16:35 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:40.849 22:16:35 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:40.849 22:16:35 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:40.849 22:16:35 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:40.849 22:16:35 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:41.108 nvmf_trace.0 00:17:41.108 22:16:36 -- common/autotest_common.sh@811 -- # return 0 00:17:41.108 22:16:36 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:41.108 22:16:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:41.108 22:16:36 -- nvmf/common.sh@116 -- # sync 00:17:41.108 22:16:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:41.108 22:16:36 -- nvmf/common.sh@119 -- # set +e 00:17:41.108 22:16:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:41.108 22:16:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:41.108 rmmod nvme_tcp 00:17:41.108 rmmod nvme_fabrics 00:17:41.108 rmmod nvme_keyring 00:17:41.108 22:16:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:41.108 22:16:36 -- nvmf/common.sh@123 -- # set -e 00:17:41.108 22:16:36 -- nvmf/common.sh@124 -- # return 0 00:17:41.108 22:16:36 -- nvmf/common.sh@477 -- # '[' -n 3544306 ']' 00:17:41.108 22:16:36 -- nvmf/common.sh@478 -- # killprocess 3544306 00:17:41.108 22:16:36 -- common/autotest_common.sh@926 -- # '[' -z 3544306 ']' 00:17:41.108 22:16:36 -- common/autotest_common.sh@930 -- # kill -0 3544306 00:17:41.108 22:16:36 -- common/autotest_common.sh@931 -- # uname 00:17:41.108 22:16:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:41.108 22:16:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3544306 00:17:41.108 22:16:36 -- common/autotest_common.sh@932 
-- # process_name=reactor_0 00:17:41.108 22:16:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:41.108 22:16:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3544306' 00:17:41.108 killing process with pid 3544306 00:17:41.108 22:16:36 -- common/autotest_common.sh@945 -- # kill 3544306 00:17:41.108 22:16:36 -- common/autotest_common.sh@950 -- # wait 3544306 00:17:41.367 22:16:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:41.367 22:16:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:41.367 22:16:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:41.367 22:16:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.367 22:16:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:41.367 22:16:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.367 22:16:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.367 22:16:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.272 22:16:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:43.272 00:17:43.272 real 0m42.035s 00:17:43.272 user 1m4.882s 00:17:43.272 sys 0m10.050s 00:17:43.272 22:16:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:43.272 22:16:38 -- common/autotest_common.sh@10 -- # set +x 00:17:43.272 ************************************ 00:17:43.272 END TEST nvmf_lvs_grow 00:17:43.272 ************************************ 00:17:43.272 22:16:38 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:43.272 22:16:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:43.272 22:16:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:43.272 22:16:38 -- common/autotest_common.sh@10 -- # set +x 00:17:43.272 ************************************ 00:17:43.272 START TEST nvmf_bdev_io_wait 00:17:43.272 ************************************ 00:17:43.272 22:16:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:43.531 * Looking for test storage... 
00:17:43.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:43.531 22:16:38 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:43.531 22:16:38 -- nvmf/common.sh@7 -- # uname -s 00:17:43.531 22:16:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.531 22:16:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.531 22:16:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.531 22:16:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.531 22:16:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.531 22:16:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.531 22:16:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.531 22:16:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.531 22:16:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.531 22:16:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.531 22:16:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.531 22:16:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.531 22:16:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.531 22:16:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.531 22:16:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:43.531 22:16:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:43.531 22:16:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.531 22:16:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.531 22:16:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.531 22:16:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.531 22:16:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.531 22:16:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.531 22:16:38 -- paths/export.sh@5 -- # export PATH 00:17:43.531 22:16:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.531 22:16:38 -- nvmf/common.sh@46 -- # : 0 00:17:43.531 22:16:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:43.531 22:16:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:43.531 22:16:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:43.531 22:16:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.531 22:16:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.531 22:16:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:43.531 22:16:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:43.531 22:16:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:43.531 22:16:38 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:43.531 22:16:38 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:43.531 22:16:38 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:43.531 22:16:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:43.531 22:16:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.531 22:16:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:43.531 22:16:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:43.531 22:16:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:43.531 22:16:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.531 22:16:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.531 22:16:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.531 22:16:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:43.531 22:16:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:43.531 22:16:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:43.531 22:16:38 -- common/autotest_common.sh@10 -- # set +x 00:17:48.804 22:16:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:48.804 22:16:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:48.804 22:16:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:48.804 22:16:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:48.804 22:16:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:48.804 22:16:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:48.804 22:16:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:48.804 22:16:43 -- nvmf/common.sh@294 -- # net_devs=() 00:17:48.804 22:16:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:48.804 22:16:43 -- 
nvmf/common.sh@295 -- # e810=() 00:17:48.804 22:16:43 -- nvmf/common.sh@295 -- # local -ga e810 00:17:48.804 22:16:43 -- nvmf/common.sh@296 -- # x722=() 00:17:48.804 22:16:43 -- nvmf/common.sh@296 -- # local -ga x722 00:17:48.804 22:16:43 -- nvmf/common.sh@297 -- # mlx=() 00:17:48.804 22:16:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:48.804 22:16:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.804 22:16:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.804 22:16:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.804 22:16:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.804 22:16:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.804 22:16:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.804 22:16:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.804 22:16:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.804 22:16:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.804 22:16:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.804 22:16:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.804 22:16:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:48.804 22:16:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:48.804 22:16:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:48.804 22:16:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:48.804 22:16:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:48.804 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:48.804 22:16:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:48.804 22:16:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:48.804 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:48.804 22:16:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:48.804 22:16:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:48.804 22:16:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.804 22:16:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:48.804 22:16:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.804 22:16:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:17:48.804 Found net devices under 0000:86:00.0: cvl_0_0 00:17:48.804 22:16:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.804 22:16:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:48.804 22:16:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.804 22:16:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:48.804 22:16:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.804 22:16:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:48.804 Found net devices under 0000:86:00.1: cvl_0_1 00:17:48.804 22:16:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.804 22:16:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:48.804 22:16:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:48.804 22:16:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:48.804 22:16:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:48.804 22:16:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.804 22:16:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.804 22:16:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.804 22:16:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:48.804 22:16:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.804 22:16:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.804 22:16:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:48.804 22:16:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.804 22:16:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.804 22:16:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:48.804 22:16:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:48.804 22:16:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.804 22:16:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.088 22:16:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.088 22:16:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.088 22:16:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:49.088 22:16:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.088 22:16:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.088 22:16:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.088 22:16:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:49.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:17:49.088 00:17:49.088 --- 10.0.0.2 ping statistics --- 00:17:49.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.088 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:17:49.088 22:16:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:49.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:17:49.088 00:17:49.088 --- 10.0.0.1 ping statistics --- 00:17:49.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.088 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:17:49.088 22:16:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.088 22:16:44 -- nvmf/common.sh@410 -- # return 0 00:17:49.088 22:16:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:49.088 22:16:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.088 22:16:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:49.088 22:16:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:49.088 22:16:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.088 22:16:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:49.088 22:16:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:49.088 22:16:44 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:49.088 22:16:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:49.088 22:16:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:49.088 22:16:44 -- common/autotest_common.sh@10 -- # set +x 00:17:49.088 22:16:44 -- nvmf/common.sh@469 -- # nvmfpid=3548380 00:17:49.088 22:16:44 -- nvmf/common.sh@470 -- # waitforlisten 3548380 00:17:49.088 22:16:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:49.088 22:16:44 -- common/autotest_common.sh@819 -- # '[' -z 3548380 ']' 00:17:49.088 22:16:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.088 22:16:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:49.088 22:16:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.088 22:16:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:49.088 22:16:44 -- common/autotest_common.sh@10 -- # set +x 00:17:49.088 [2024-07-24 22:16:44.163385] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:49.088 [2024-07-24 22:16:44.163426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.088 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.365 [2024-07-24 22:16:44.221960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:49.365 [2024-07-24 22:16:44.264903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:49.365 [2024-07-24 22:16:44.265018] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.365 [2024-07-24 22:16:44.265028] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.365 [2024-07-24 22:16:44.265036] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
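The nvmf_tcp_init sequence traced above turns the two E810 ports (presumably cabled back to back on this rig) into a self-contained NVMe/TCP test link: cvl_0_0 becomes the target side at 10.0.0.2 inside a private network namespace, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1. A minimal standalone sketch of that topology, using the interface names, addresses and port from this run:

# Target port goes into its own namespace; initiator port stays in the root ns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
ping -c 1 10.0.0.2                                                  # root ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator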
00:17:49.365 [2024-07-24 22:16:44.265076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.365 [2024-07-24 22:16:44.265175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.365 [2024-07-24 22:16:44.265260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:49.365 [2024-07-24 22:16:44.265261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.365 22:16:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:49.365 22:16:44 -- common/autotest_common.sh@852 -- # return 0 00:17:49.365 22:16:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:49.365 22:16:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:49.365 22:16:44 -- common/autotest_common.sh@10 -- # set +x 00:17:49.365 22:16:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.365 22:16:44 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:49.365 22:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.365 22:16:44 -- common/autotest_common.sh@10 -- # set +x 00:17:49.365 22:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.365 22:16:44 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:49.365 22:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.365 22:16:44 -- common/autotest_common.sh@10 -- # set +x 00:17:49.365 22:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.365 22:16:44 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:49.365 22:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.365 22:16:44 -- common/autotest_common.sh@10 -- # set +x 00:17:49.365 [2024-07-24 22:16:44.409772] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:49.365 22:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.365 22:16:44 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:49.365 22:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.365 22:16:44 -- common/autotest_common.sh@10 -- # set +x 00:17:49.365 Malloc0 00:17:49.365 22:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.365 22:16:44 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:49.365 22:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.365 22:16:44 -- common/autotest_common.sh@10 -- # set +x 00:17:49.365 22:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.365 22:16:44 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:49.365 22:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.365 22:16:44 -- common/autotest_common.sh@10 -- # set +x 00:17:49.365 22:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.365 22:16:44 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:49.365 22:16:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:49.365 22:16:44 -- common/autotest_common.sh@10 -- # set +x 00:17:49.365 [2024-07-24 22:16:44.470908] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.365 22:16:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:49.365 22:16:44 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3548412 00:17:49.365 
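rpc_cmd in the trace is the harness wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock; because nvmf_tgt was started with --wait-for-rpc, bdev_set_options (a deliberately small bdev_io pool and cache, which is what gives bdev_io_wait something to exercise) has to be issued before framework_start_init. Spelled out with the plain RPC client, the bring-up just traced is roughly:

# Assumes the nvmf_tgt launched above is listening on /var/tmp/spdk.sock.
RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_set_options -p 5 -c 1                  # tiny bdev_io pool/cache, before init
$RPC framework_start_init                        # finish the --wait-for-rpc startup
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420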
22:16:44 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:49.365 22:16:44 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:49.365 22:16:44 -- target/bdev_io_wait.sh@30 -- # READ_PID=3548414 00:17:49.365 22:16:44 -- nvmf/common.sh@520 -- # config=() 00:17:49.365 22:16:44 -- nvmf/common.sh@520 -- # local subsystem config 00:17:49.365 22:16:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:49.365 22:16:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:49.365 { 00:17:49.365 "params": { 00:17:49.365 "name": "Nvme$subsystem", 00:17:49.365 "trtype": "$TEST_TRANSPORT", 00:17:49.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.365 "adrfam": "ipv4", 00:17:49.365 "trsvcid": "$NVMF_PORT", 00:17:49.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.365 "hdgst": ${hdgst:-false}, 00:17:49.365 "ddgst": ${ddgst:-false} 00:17:49.365 }, 00:17:49.365 "method": "bdev_nvme_attach_controller" 00:17:49.365 } 00:17:49.365 EOF 00:17:49.365 )") 00:17:49.365 22:16:44 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:49.366 22:16:44 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:49.366 22:16:44 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3548416 00:17:49.366 22:16:44 -- nvmf/common.sh@520 -- # config=() 00:17:49.366 22:16:44 -- nvmf/common.sh@520 -- # local subsystem config 00:17:49.366 22:16:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:49.366 22:16:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:49.366 { 00:17:49.366 "params": { 00:17:49.366 "name": "Nvme$subsystem", 00:17:49.366 "trtype": "$TEST_TRANSPORT", 00:17:49.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.366 "adrfam": "ipv4", 00:17:49.366 "trsvcid": "$NVMF_PORT", 00:17:49.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.366 "hdgst": ${hdgst:-false}, 00:17:49.366 "ddgst": ${ddgst:-false} 00:17:49.366 }, 00:17:49.366 "method": "bdev_nvme_attach_controller" 00:17:49.366 } 00:17:49.366 EOF 00:17:49.366 )") 00:17:49.366 22:16:44 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:49.366 22:16:44 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:49.366 22:16:44 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3548419 00:17:49.366 22:16:44 -- nvmf/common.sh@542 -- # cat 00:17:49.366 22:16:44 -- target/bdev_io_wait.sh@35 -- # sync 00:17:49.366 22:16:44 -- nvmf/common.sh@520 -- # config=() 00:17:49.366 22:16:44 -- nvmf/common.sh@520 -- # local subsystem config 00:17:49.366 22:16:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:49.366 22:16:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:49.366 { 00:17:49.366 "params": { 00:17:49.366 "name": "Nvme$subsystem", 00:17:49.366 "trtype": "$TEST_TRANSPORT", 00:17:49.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.366 "adrfam": "ipv4", 00:17:49.366 "trsvcid": "$NVMF_PORT", 00:17:49.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.366 "hdgst": ${hdgst:-false}, 00:17:49.366 "ddgst": ${ddgst:-false} 00:17:49.366 }, 
00:17:49.366 "method": "bdev_nvme_attach_controller" 00:17:49.366 } 00:17:49.366 EOF 00:17:49.366 )") 00:17:49.366 22:16:44 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:49.366 22:16:44 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:49.366 22:16:44 -- nvmf/common.sh@542 -- # cat 00:17:49.366 22:16:44 -- nvmf/common.sh@520 -- # config=() 00:17:49.366 22:16:44 -- nvmf/common.sh@520 -- # local subsystem config 00:17:49.366 22:16:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:49.366 22:16:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:49.366 { 00:17:49.366 "params": { 00:17:49.366 "name": "Nvme$subsystem", 00:17:49.366 "trtype": "$TEST_TRANSPORT", 00:17:49.366 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.366 "adrfam": "ipv4", 00:17:49.366 "trsvcid": "$NVMF_PORT", 00:17:49.366 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.366 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.366 "hdgst": ${hdgst:-false}, 00:17:49.366 "ddgst": ${ddgst:-false} 00:17:49.366 }, 00:17:49.366 "method": "bdev_nvme_attach_controller" 00:17:49.366 } 00:17:49.366 EOF 00:17:49.366 )") 00:17:49.366 22:16:44 -- nvmf/common.sh@542 -- # cat 00:17:49.366 22:16:44 -- target/bdev_io_wait.sh@37 -- # wait 3548412 00:17:49.366 22:16:44 -- nvmf/common.sh@542 -- # cat 00:17:49.366 22:16:44 -- nvmf/common.sh@544 -- # jq . 00:17:49.366 22:16:44 -- nvmf/common.sh@544 -- # jq . 00:17:49.366 22:16:44 -- nvmf/common.sh@544 -- # jq . 00:17:49.366 22:16:44 -- nvmf/common.sh@545 -- # IFS=, 00:17:49.366 22:16:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:49.366 "params": { 00:17:49.366 "name": "Nvme1", 00:17:49.366 "trtype": "tcp", 00:17:49.366 "traddr": "10.0.0.2", 00:17:49.366 "adrfam": "ipv4", 00:17:49.366 "trsvcid": "4420", 00:17:49.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.366 "hdgst": false, 00:17:49.366 "ddgst": false 00:17:49.366 }, 00:17:49.366 "method": "bdev_nvme_attach_controller" 00:17:49.366 }' 00:17:49.366 22:16:44 -- nvmf/common.sh@544 -- # jq . 
00:17:49.366 22:16:44 -- nvmf/common.sh@545 -- # IFS=, 00:17:49.366 22:16:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:49.366 "params": { 00:17:49.366 "name": "Nvme1", 00:17:49.366 "trtype": "tcp", 00:17:49.366 "traddr": "10.0.0.2", 00:17:49.366 "adrfam": "ipv4", 00:17:49.366 "trsvcid": "4420", 00:17:49.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.366 "hdgst": false, 00:17:49.366 "ddgst": false 00:17:49.366 }, 00:17:49.366 "method": "bdev_nvme_attach_controller" 00:17:49.366 }' 00:17:49.366 22:16:44 -- nvmf/common.sh@545 -- # IFS=, 00:17:49.366 22:16:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:49.366 "params": { 00:17:49.366 "name": "Nvme1", 00:17:49.366 "trtype": "tcp", 00:17:49.366 "traddr": "10.0.0.2", 00:17:49.366 "adrfam": "ipv4", 00:17:49.366 "trsvcid": "4420", 00:17:49.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.366 "hdgst": false, 00:17:49.366 "ddgst": false 00:17:49.366 }, 00:17:49.366 "method": "bdev_nvme_attach_controller" 00:17:49.366 }' 00:17:49.366 22:16:44 -- nvmf/common.sh@545 -- # IFS=, 00:17:49.366 22:16:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:49.366 "params": { 00:17:49.366 "name": "Nvme1", 00:17:49.366 "trtype": "tcp", 00:17:49.366 "traddr": "10.0.0.2", 00:17:49.366 "adrfam": "ipv4", 00:17:49.366 "trsvcid": "4420", 00:17:49.366 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.366 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.366 "hdgst": false, 00:17:49.366 "ddgst": false 00:17:49.366 }, 00:17:49.366 "method": "bdev_nvme_attach_controller" 00:17:49.366 }' 00:17:49.625 [2024-07-24 22:16:44.515407] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:49.625 [2024-07-24 22:16:44.515453] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:49.625 [2024-07-24 22:16:44.518360] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:49.625 [2024-07-24 22:16:44.518398] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:49.625 [2024-07-24 22:16:44.521463] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:49.625 [2024-07-24 22:16:44.521500] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:49.625 [2024-07-24 22:16:44.524538] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:17:49.625 [2024-07-24 22:16:44.524584] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:49.625 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.625 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.625 [2024-07-24 22:16:44.687894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.625 [2024-07-24 22:16:44.713816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:49.625 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.883 [2024-07-24 22:16:44.787168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.883 [2024-07-24 22:16:44.812457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:49.883 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.883 [2024-07-24 22:16:44.884637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.883 [2024-07-24 22:16:44.913769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:49.883 [2024-07-24 22:16:44.925017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.883 [2024-07-24 22:16:44.950489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:50.141 Running I/O for 1 seconds... 00:17:50.141 Running I/O for 1 seconds... 00:17:50.399 Running I/O for 1 seconds... 00:17:50.399 Running I/O for 1 seconds... 00:17:50.966 00:17:50.966 Latency(us) 00:17:50.966 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.966 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:50.966 Nvme1n1 : 1.01 11350.56 44.34 0.00 0.00 11196.75 4131.62 33508.84 00:17:50.966 =================================================================================================================== 00:17:50.966 Total : 11350.56 44.34 0.00 0.00 11196.75 4131.62 33508.84 00:17:51.225 00:17:51.225 Latency(us) 00:17:51.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.225 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:51.225 Nvme1n1 : 1.00 251111.46 980.90 0.00 0.00 507.33 205.69 651.80 00:17:51.225 =================================================================================================================== 00:17:51.225 Total : 251111.46 980.90 0.00 0.00 507.33 205.69 651.80 00:17:51.225 22:16:46 -- target/bdev_io_wait.sh@38 -- # wait 3548414 00:17:51.225 00:17:51.225 Latency(us) 00:17:51.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.225 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:51.225 Nvme1n1 : 1.01 10926.91 42.68 0.00 0.00 11678.86 5242.88 22453.20 00:17:51.225 =================================================================================================================== 00:17:51.225 Total : 10926.91 42.68 0.00 0.00 11678.86 5242.88 22453.20 00:17:51.225 00:17:51.225 Latency(us) 00:17:51.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.225 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:51.225 Nvme1n1 : 1.01 13189.38 51.52 0.00 0.00 9678.94 4217.10 18692.01 00:17:51.225 =================================================================================================================== 00:17:51.225 Total : 13189.38 51.52 0.00 0.00 9678.94 4217.10 18692.01 00:17:51.484 
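Each bdevperf instance gets its bdev configuration from gen_nvmf_target_json; the fragment printed earlier is wrapped in the usual SPDK JSON-config layout (the subsystems / bdev / config nesting is assumed here) so that bdevperf attaches one NVMe-oF controller over TCP before running its workload. A standalone equivalent of the write job, with the config written to an ordinary file instead of /dev/fd/63:

# File name and the outer JSON wrapper are illustrative; the attach parameters
# are the ones printed by gen_nvmf_target_json in the trace above.
cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -m 0x10 --json /tmp/nvmf_bdev.json -q 128 -o 4096 -w write -t 1 -s 256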
22:16:46 -- target/bdev_io_wait.sh@39 -- # wait 3548416 00:17:51.484 22:16:46 -- target/bdev_io_wait.sh@40 -- # wait 3548419 00:17:51.484 22:16:46 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.484 22:16:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:51.484 22:16:46 -- common/autotest_common.sh@10 -- # set +x 00:17:51.484 22:16:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:51.484 22:16:46 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:51.484 22:16:46 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:51.484 22:16:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:51.484 22:16:46 -- nvmf/common.sh@116 -- # sync 00:17:51.484 22:16:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:51.484 22:16:46 -- nvmf/common.sh@119 -- # set +e 00:17:51.484 22:16:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:51.484 22:16:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:51.484 rmmod nvme_tcp 00:17:51.484 rmmod nvme_fabrics 00:17:51.484 rmmod nvme_keyring 00:17:51.484 22:16:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:51.743 22:16:46 -- nvmf/common.sh@123 -- # set -e 00:17:51.743 22:16:46 -- nvmf/common.sh@124 -- # return 0 00:17:51.743 22:16:46 -- nvmf/common.sh@477 -- # '[' -n 3548380 ']' 00:17:51.743 22:16:46 -- nvmf/common.sh@478 -- # killprocess 3548380 00:17:51.743 22:16:46 -- common/autotest_common.sh@926 -- # '[' -z 3548380 ']' 00:17:51.743 22:16:46 -- common/autotest_common.sh@930 -- # kill -0 3548380 00:17:51.743 22:16:46 -- common/autotest_common.sh@931 -- # uname 00:17:51.743 22:16:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:51.743 22:16:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3548380 00:17:51.743 22:16:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:51.743 22:16:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:51.743 22:16:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3548380' 00:17:51.743 killing process with pid 3548380 00:17:51.743 22:16:46 -- common/autotest_common.sh@945 -- # kill 3548380 00:17:51.743 22:16:46 -- common/autotest_common.sh@950 -- # wait 3548380 00:17:51.743 22:16:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:51.743 22:16:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:51.743 22:16:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:51.743 22:16:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.743 22:16:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:51.743 22:16:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.743 22:16:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.743 22:16:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.278 22:16:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:54.278 00:17:54.278 real 0m10.518s 00:17:54.278 user 0m16.997s 00:17:54.278 sys 0m5.901s 00:17:54.278 22:16:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:54.278 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:17:54.278 ************************************ 00:17:54.278 END TEST nvmf_bdev_io_wait 00:17:54.278 ************************************ 00:17:54.278 22:16:48 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:54.278 22:16:48 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:17:54.278 22:16:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:54.278 22:16:48 -- common/autotest_common.sh@10 -- # set +x 00:17:54.278 ************************************ 00:17:54.278 START TEST nvmf_queue_depth 00:17:54.278 ************************************ 00:17:54.278 22:16:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:54.278 * Looking for test storage... 00:17:54.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:54.278 22:16:49 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:54.278 22:16:49 -- nvmf/common.sh@7 -- # uname -s 00:17:54.278 22:16:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.278 22:16:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.278 22:16:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.278 22:16:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.278 22:16:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:54.278 22:16:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:54.278 22:16:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.278 22:16:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:54.278 22:16:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.278 22:16:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:54.278 22:16:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.278 22:16:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.278 22:16:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.278 22:16:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:54.278 22:16:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:54.278 22:16:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:54.278 22:16:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.278 22:16:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.278 22:16:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.279 22:16:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.279 22:16:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.279 22:16:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.279 22:16:49 -- paths/export.sh@5 -- # export PATH 00:17:54.279 22:16:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.279 22:16:49 -- nvmf/common.sh@46 -- # : 0 00:17:54.279 22:16:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:54.279 22:16:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:54.279 22:16:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:54.279 22:16:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.279 22:16:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.279 22:16:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:54.279 22:16:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:54.279 22:16:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:54.279 22:16:49 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:54.279 22:16:49 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:54.279 22:16:49 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:54.279 22:16:49 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:54.279 22:16:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:54.279 22:16:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.279 22:16:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:54.279 22:16:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:54.279 22:16:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:54.279 22:16:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:54.279 22:16:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:54.279 22:16:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:54.279 22:16:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:54.279 22:16:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:54.279 22:16:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:54.279 22:16:49 -- common/autotest_common.sh@10 -- # set +x 00:17:59.557 22:16:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:59.557 22:16:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:59.557 22:16:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:59.557 22:16:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:59.557 22:16:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:59.557 22:16:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:59.558 22:16:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:59.558 22:16:54 -- nvmf/common.sh@294 -- # net_devs=() 
00:17:59.558 22:16:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:59.558 22:16:54 -- nvmf/common.sh@295 -- # e810=() 00:17:59.558 22:16:54 -- nvmf/common.sh@295 -- # local -ga e810 00:17:59.558 22:16:54 -- nvmf/common.sh@296 -- # x722=() 00:17:59.558 22:16:54 -- nvmf/common.sh@296 -- # local -ga x722 00:17:59.558 22:16:54 -- nvmf/common.sh@297 -- # mlx=() 00:17:59.558 22:16:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:59.558 22:16:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.558 22:16:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.558 22:16:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.558 22:16:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.558 22:16:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.558 22:16:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.558 22:16:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.558 22:16:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.558 22:16:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.558 22:16:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.558 22:16:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.558 22:16:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:59.558 22:16:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:59.558 22:16:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:59.558 22:16:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:59.558 22:16:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:59.558 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:59.558 22:16:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:59.558 22:16:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:59.558 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:59.558 22:16:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:59.558 22:16:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:59.558 22:16:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.558 22:16:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:59.558 22:16:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:59.558 22:16:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:59.558 Found net devices under 0000:86:00.0: cvl_0_0 00:17:59.558 22:16:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.558 22:16:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:59.558 22:16:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.558 22:16:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:59.558 22:16:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.558 22:16:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:59.558 Found net devices under 0000:86:00.1: cvl_0_1 00:17:59.558 22:16:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.558 22:16:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:59.558 22:16:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:59.558 22:16:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:59.558 22:16:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.558 22:16:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.558 22:16:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.558 22:16:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:59.558 22:16:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.558 22:16:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.558 22:16:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:59.558 22:16:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.558 22:16:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.558 22:16:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:59.558 22:16:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:59.558 22:16:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.558 22:16:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.558 22:16:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.558 22:16:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.558 22:16:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:59.558 22:16:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.558 22:16:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.558 22:16:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.558 22:16:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:59.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:17:59.558 00:17:59.558 --- 10.0.0.2 ping statistics --- 00:17:59.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.558 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:17:59.558 22:16:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:59.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:17:59.558 00:17:59.558 --- 10.0.0.1 ping statistics --- 00:17:59.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.558 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:17:59.558 22:16:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.558 22:16:54 -- nvmf/common.sh@410 -- # return 0 00:17:59.558 22:16:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:59.558 22:16:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.558 22:16:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:59.558 22:16:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.558 22:16:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:59.558 22:16:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:59.558 22:16:54 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:59.558 22:16:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:59.558 22:16:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:59.558 22:16:54 -- common/autotest_common.sh@10 -- # set +x 00:17:59.558 22:16:54 -- nvmf/common.sh@469 -- # nvmfpid=3552395 00:17:59.558 22:16:54 -- nvmf/common.sh@470 -- # waitforlisten 3552395 00:17:59.558 22:16:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:59.558 22:16:54 -- common/autotest_common.sh@819 -- # '[' -z 3552395 ']' 00:17:59.558 22:16:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.558 22:16:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:59.558 22:16:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.558 22:16:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:59.558 22:16:54 -- common/autotest_common.sh@10 -- # set +x 00:17:59.817 [2024-07-24 22:16:54.728225] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:17:59.817 [2024-07-24 22:16:54.728267] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.817 EAL: No free 2048 kB hugepages reported on node 1 00:17:59.817 [2024-07-24 22:16:54.784878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.817 [2024-07-24 22:16:54.823250] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:59.817 [2024-07-24 22:16:54.823379] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.817 [2024-07-24 22:16:54.823387] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.817 [2024-07-24 22:16:54.823394] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
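waitforlisten above (here for pid 3552395, earlier for 3548380) simply blocks until the freshly started nvmf_tgt answers on its RPC socket, bailing out if the process dies first. A rough equivalent of that helper, assuming the default /var/tmp/spdk.sock and using rpc_get_methods as the readiness probe:

# Illustrative polling loop; the real helper lives in common/autotest_common.sh.
nvmfpid=3552395
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done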
00:17:59.817 [2024-07-24 22:16:54.823413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.752 22:16:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:00.752 22:16:55 -- common/autotest_common.sh@852 -- # return 0 00:18:00.752 22:16:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:00.752 22:16:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:00.752 22:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:00.752 22:16:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.752 22:16:55 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:00.752 22:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.752 22:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:00.752 [2024-07-24 22:16:55.560728] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.752 22:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.752 22:16:55 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:00.752 22:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.752 22:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:00.752 Malloc0 00:18:00.752 22:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.752 22:16:55 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:00.752 22:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.752 22:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:00.752 22:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.752 22:16:55 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.752 22:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.752 22:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:00.752 22:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.752 22:16:55 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:00.752 22:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:00.752 22:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:00.752 [2024-07-24 22:16:55.623771] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.752 22:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:00.752 22:16:55 -- target/queue_depth.sh@30 -- # bdevperf_pid=3552463 00:18:00.752 22:16:55 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:00.752 22:16:55 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:00.752 22:16:55 -- target/queue_depth.sh@33 -- # waitforlisten 3552463 /var/tmp/bdevperf.sock 00:18:00.752 22:16:55 -- common/autotest_common.sh@819 -- # '[' -z 3552463 ']' 00:18:00.752 22:16:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.752 22:16:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:00.752 22:16:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:00.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.752 22:16:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:00.752 22:16:55 -- common/autotest_common.sh@10 -- # set +x 00:18:00.752 [2024-07-24 22:16:55.670351] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:00.752 [2024-07-24 22:16:55.670392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3552463 ] 00:18:00.752 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.752 [2024-07-24 22:16:55.722698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.752 [2024-07-24 22:16:55.760139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.687 22:16:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:01.687 22:16:56 -- common/autotest_common.sh@852 -- # return 0 00:18:01.687 22:16:56 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:01.687 22:16:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.687 22:16:56 -- common/autotest_common.sh@10 -- # set +x 00:18:01.687 NVMe0n1 00:18:01.687 22:16:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.687 22:16:56 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:01.687 Running I/O for 10 seconds... 00:18:13.892 00:18:13.892 Latency(us) 00:18:13.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.892 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:13.892 Verification LBA range: start 0x0 length 0x4000 00:18:13.892 NVMe0n1 : 10.05 17791.47 69.50 0.00 0.00 57393.08 10941.66 58127.58 00:18:13.892 =================================================================================================================== 00:18:13.892 Total : 17791.47 69.50 0.00 0.00 57393.08 10941.66 58127.58 00:18:13.892 0 00:18:13.893 22:17:06 -- target/queue_depth.sh@39 -- # killprocess 3552463 00:18:13.893 22:17:06 -- common/autotest_common.sh@926 -- # '[' -z 3552463 ']' 00:18:13.893 22:17:06 -- common/autotest_common.sh@930 -- # kill -0 3552463 00:18:13.893 22:17:06 -- common/autotest_common.sh@931 -- # uname 00:18:13.893 22:17:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:13.893 22:17:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3552463 00:18:13.893 22:17:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:13.893 22:17:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:13.893 22:17:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3552463' 00:18:13.893 killing process with pid 3552463 00:18:13.893 22:17:06 -- common/autotest_common.sh@945 -- # kill 3552463 00:18:13.893 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.893 00:18:13.893 Latency(us) 00:18:13.893 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.893 =================================================================================================================== 00:18:13.893 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:13.893 22:17:06 -- 
common/autotest_common.sh@950 -- # wait 3552463 00:18:13.893 22:17:07 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:13.893 22:17:07 -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:13.893 22:17:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:13.893 22:17:07 -- nvmf/common.sh@116 -- # sync 00:18:13.893 22:17:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:13.893 22:17:07 -- nvmf/common.sh@119 -- # set +e 00:18:13.893 22:17:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:13.893 22:17:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:13.893 rmmod nvme_tcp 00:18:13.893 rmmod nvme_fabrics 00:18:13.893 rmmod nvme_keyring 00:18:13.893 22:17:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:13.893 22:17:07 -- nvmf/common.sh@123 -- # set -e 00:18:13.893 22:17:07 -- nvmf/common.sh@124 -- # return 0 00:18:13.893 22:17:07 -- nvmf/common.sh@477 -- # '[' -n 3552395 ']' 00:18:13.893 22:17:07 -- nvmf/common.sh@478 -- # killprocess 3552395 00:18:13.893 22:17:07 -- common/autotest_common.sh@926 -- # '[' -z 3552395 ']' 00:18:13.893 22:17:07 -- common/autotest_common.sh@930 -- # kill -0 3552395 00:18:13.893 22:17:07 -- common/autotest_common.sh@931 -- # uname 00:18:13.893 22:17:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:13.893 22:17:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3552395 00:18:13.893 22:17:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:13.893 22:17:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:13.893 22:17:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3552395' 00:18:13.893 killing process with pid 3552395 00:18:13.893 22:17:07 -- common/autotest_common.sh@945 -- # kill 3552395 00:18:13.893 22:17:07 -- common/autotest_common.sh@950 -- # wait 3552395 00:18:13.893 22:17:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:13.893 22:17:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:13.893 22:17:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:13.893 22:17:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.893 22:17:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:13.893 22:17:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.893 22:17:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.893 22:17:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.460 22:17:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:14.460 00:18:14.460 real 0m20.504s 00:18:14.460 user 0m24.868s 00:18:14.460 sys 0m5.890s 00:18:14.460 22:17:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:14.460 22:17:09 -- common/autotest_common.sh@10 -- # set +x 00:18:14.460 ************************************ 00:18:14.460 END TEST nvmf_queue_depth 00:18:14.460 ************************************ 00:18:14.460 22:17:09 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:14.460 22:17:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:14.460 22:17:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:14.460 22:17:09 -- common/autotest_common.sh@10 -- # set +x 00:18:14.460 ************************************ 00:18:14.460 START TEST nvmf_multipath 00:18:14.460 ************************************ 00:18:14.460 22:17:09 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:14.460 * Looking for test storage... 00:18:14.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:14.460 22:17:09 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.460 22:17:09 -- nvmf/common.sh@7 -- # uname -s 00:18:14.460 22:17:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.460 22:17:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.460 22:17:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.460 22:17:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.460 22:17:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.460 22:17:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.460 22:17:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.460 22:17:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.460 22:17:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.460 22:17:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.718 22:17:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:14.718 22:17:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:14.718 22:17:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.718 22:17:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.718 22:17:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:14.718 22:17:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:14.718 22:17:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.718 22:17:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.718 22:17:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.718 22:17:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.718 22:17:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.718 22:17:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.718 22:17:09 -- paths/export.sh@5 -- # export PATH 00:18:14.718 22:17:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.718 22:17:09 -- nvmf/common.sh@46 -- # : 0 00:18:14.718 22:17:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:14.718 22:17:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:14.718 22:17:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:14.718 22:17:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.718 22:17:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.718 22:17:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:14.718 22:17:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:14.718 22:17:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:14.718 22:17:09 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:14.718 22:17:09 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:14.718 22:17:09 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:14.718 22:17:09 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:14.718 22:17:09 -- target/multipath.sh@43 -- # nvmftestinit 00:18:14.718 22:17:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:14.718 22:17:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.718 22:17:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:14.718 22:17:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:14.718 22:17:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:14.718 22:17:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.718 22:17:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.718 22:17:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.718 22:17:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:14.718 22:17:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:14.718 22:17:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:14.718 22:17:09 -- common/autotest_common.sh@10 -- # set +x 00:18:19.987 22:17:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:19.987 22:17:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:19.987 22:17:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:19.987 22:17:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:19.987 22:17:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:19.987 22:17:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:19.987 22:17:14 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:18:19.987 22:17:14 -- nvmf/common.sh@294 -- # net_devs=() 00:18:19.987 22:17:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:19.987 22:17:14 -- nvmf/common.sh@295 -- # e810=() 00:18:19.987 22:17:14 -- nvmf/common.sh@295 -- # local -ga e810 00:18:19.987 22:17:14 -- nvmf/common.sh@296 -- # x722=() 00:18:19.987 22:17:14 -- nvmf/common.sh@296 -- # local -ga x722 00:18:19.987 22:17:14 -- nvmf/common.sh@297 -- # mlx=() 00:18:19.987 22:17:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:19.987 22:17:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:19.987 22:17:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:19.987 22:17:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:19.987 22:17:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:19.987 22:17:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:19.987 22:17:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:19.987 22:17:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:19.987 22:17:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:19.987 22:17:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:19.987 22:17:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:19.987 22:17:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:19.987 22:17:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:19.987 22:17:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:19.987 22:17:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:19.987 22:17:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:19.987 22:17:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:19.987 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:19.987 22:17:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:19.987 22:17:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:19.987 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:19.987 22:17:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:19.987 22:17:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:19.987 22:17:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.987 22:17:14 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:18:19.987 22:17:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.987 22:17:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:19.987 Found net devices under 0000:86:00.0: cvl_0_0 00:18:19.987 22:17:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.987 22:17:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:19.987 22:17:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:19.987 22:17:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:19.987 22:17:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:19.987 22:17:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:19.987 Found net devices under 0000:86:00.1: cvl_0_1 00:18:19.987 22:17:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:19.987 22:17:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:19.987 22:17:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:19.987 22:17:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:19.987 22:17:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:19.987 22:17:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:19.987 22:17:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:19.987 22:17:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:19.987 22:17:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:19.987 22:17:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:19.987 22:17:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:19.987 22:17:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:19.987 22:17:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:19.987 22:17:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:19.987 22:17:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:19.987 22:17:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:19.987 22:17:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:19.987 22:17:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:19.987 22:17:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:19.987 22:17:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:19.987 22:17:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:19.987 22:17:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:19.987 22:17:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:19.987 22:17:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:19.987 22:17:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:19.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:19.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:18:19.987 00:18:19.987 --- 10.0.0.2 ping statistics --- 00:18:19.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.987 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:18:19.987 22:17:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:19.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:19.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.451 ms 00:18:19.987 00:18:19.987 --- 10.0.0.1 ping statistics --- 00:18:19.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:19.987 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:18:19.987 22:17:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:19.987 22:17:15 -- nvmf/common.sh@410 -- # return 0 00:18:19.987 22:17:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:19.987 22:17:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:19.987 22:17:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:19.987 22:17:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:19.987 22:17:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:19.987 22:17:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:19.987 22:17:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:19.987 22:17:15 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:19.987 22:17:15 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:19.987 only one NIC for nvmf test 00:18:19.987 22:17:15 -- target/multipath.sh@47 -- # nvmftestfini 00:18:19.987 22:17:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:19.987 22:17:15 -- nvmf/common.sh@116 -- # sync 00:18:20.247 22:17:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:20.247 22:17:15 -- nvmf/common.sh@119 -- # set +e 00:18:20.247 22:17:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:20.247 22:17:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:20.247 rmmod nvme_tcp 00:18:20.247 rmmod nvme_fabrics 00:18:20.247 rmmod nvme_keyring 00:18:20.247 22:17:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:20.247 22:17:15 -- nvmf/common.sh@123 -- # set -e 00:18:20.247 22:17:15 -- nvmf/common.sh@124 -- # return 0 00:18:20.247 22:17:15 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:20.247 22:17:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:20.247 22:17:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:20.247 22:17:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:20.247 22:17:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:20.247 22:17:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:20.247 22:17:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.247 22:17:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.247 22:17:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.192 22:17:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:22.192 22:17:17 -- target/multipath.sh@48 -- # exit 0 00:18:22.192 22:17:17 -- target/multipath.sh@1 -- # nvmftestfini 00:18:22.192 22:17:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:22.192 22:17:17 -- nvmf/common.sh@116 -- # sync 00:18:22.192 22:17:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:22.192 22:17:17 -- nvmf/common.sh@119 -- # set +e 00:18:22.192 22:17:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:22.192 22:17:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:22.192 22:17:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:22.192 22:17:17 -- nvmf/common.sh@123 -- # set -e 00:18:22.192 22:17:17 -- nvmf/common.sh@124 -- # return 0 00:18:22.192 22:17:17 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:22.192 22:17:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:22.192 22:17:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:22.192 22:17:17 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:18:22.192 22:17:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.192 22:17:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:22.192 22:17:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.192 22:17:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.192 22:17:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.192 22:17:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:22.192 00:18:22.192 real 0m7.770s 00:18:22.192 user 0m1.584s 00:18:22.192 sys 0m4.180s 00:18:22.192 22:17:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.192 22:17:17 -- common/autotest_common.sh@10 -- # set +x 00:18:22.192 ************************************ 00:18:22.192 END TEST nvmf_multipath 00:18:22.192 ************************************ 00:18:22.192 22:17:17 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:22.192 22:17:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:22.192 22:17:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:22.192 22:17:17 -- common/autotest_common.sh@10 -- # set +x 00:18:22.192 ************************************ 00:18:22.192 START TEST nvmf_zcopy 00:18:22.192 ************************************ 00:18:22.192 22:17:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:22.451 * Looking for test storage... 00:18:22.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.451 22:17:17 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.451 22:17:17 -- nvmf/common.sh@7 -- # uname -s 00:18:22.451 22:17:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.451 22:17:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.452 22:17:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.452 22:17:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.452 22:17:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.452 22:17:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.452 22:17:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.452 22:17:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.452 22:17:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.452 22:17:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.452 22:17:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.452 22:17:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:22.452 22:17:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.452 22:17:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.452 22:17:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.452 22:17:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.452 22:17:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.452 22:17:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.452 22:17:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.452 22:17:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.452 22:17:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.452 22:17:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.452 22:17:17 -- paths/export.sh@5 -- # export PATH 00:18:22.452 22:17:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.452 22:17:17 -- nvmf/common.sh@46 -- # : 0 00:18:22.452 22:17:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:22.452 22:17:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:22.452 22:17:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:22.452 22:17:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.452 22:17:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.452 22:17:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:22.452 22:17:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:22.452 22:17:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:22.452 22:17:17 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:22.452 22:17:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:22.452 22:17:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.452 22:17:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:22.452 22:17:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:22.452 22:17:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:22.452 22:17:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.452 22:17:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.452 22:17:17 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.452 22:17:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:22.452 22:17:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:22.452 22:17:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:22.452 22:17:17 -- common/autotest_common.sh@10 -- # set +x 00:18:27.725 22:17:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:27.725 22:17:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:27.725 22:17:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:27.725 22:17:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:27.725 22:17:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:27.725 22:17:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:27.726 22:17:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:27.726 22:17:22 -- nvmf/common.sh@294 -- # net_devs=() 00:18:27.726 22:17:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:27.726 22:17:22 -- nvmf/common.sh@295 -- # e810=() 00:18:27.726 22:17:22 -- nvmf/common.sh@295 -- # local -ga e810 00:18:27.726 22:17:22 -- nvmf/common.sh@296 -- # x722=() 00:18:27.726 22:17:22 -- nvmf/common.sh@296 -- # local -ga x722 00:18:27.726 22:17:22 -- nvmf/common.sh@297 -- # mlx=() 00:18:27.726 22:17:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:27.726 22:17:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.726 22:17:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.726 22:17:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.726 22:17:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.726 22:17:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.726 22:17:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:27.726 22:17:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.726 22:17:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.726 22:17:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.726 22:17:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.726 22:17:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.726 22:17:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:27.726 22:17:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:27.726 22:17:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:27.726 22:17:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:27.726 22:17:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:27.726 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:27.726 22:17:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:27.726 22:17:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:27.726 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:27.726 
22:17:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:27.726 22:17:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:27.726 22:17:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.726 22:17:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:27.726 22:17:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.726 22:17:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:27.726 Found net devices under 0000:86:00.0: cvl_0_0 00:18:27.726 22:17:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.726 22:17:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:27.726 22:17:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.726 22:17:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:27.726 22:17:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.726 22:17:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:27.726 Found net devices under 0000:86:00.1: cvl_0_1 00:18:27.726 22:17:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.726 22:17:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:27.726 22:17:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:27.726 22:17:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:27.726 22:17:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.726 22:17:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.726 22:17:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.726 22:17:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:27.726 22:17:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.726 22:17:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.726 22:17:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:27.726 22:17:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.726 22:17:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.726 22:17:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:27.726 22:17:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:27.726 22:17:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.726 22:17:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.726 22:17:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.726 22:17:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.726 22:17:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:27.726 22:17:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.726 22:17:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.726 22:17:22 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.726 22:17:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:27.726 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.726 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:18:27.726 00:18:27.726 --- 10.0.0.2 ping statistics --- 00:18:27.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.726 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:18:27.726 22:17:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.726 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:27.726 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:18:27.726 00:18:27.726 --- 10.0.0.1 ping statistics --- 00:18:27.726 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.726 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:18:27.726 22:17:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.726 22:17:22 -- nvmf/common.sh@410 -- # return 0 00:18:27.726 22:17:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:27.726 22:17:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.726 22:17:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:27.726 22:17:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.726 22:17:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:27.726 22:17:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:27.726 22:17:22 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:27.726 22:17:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:27.726 22:17:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:27.726 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:18:27.726 22:17:22 -- nvmf/common.sh@469 -- # nvmfpid=3561378 00:18:27.726 22:17:22 -- nvmf/common.sh@470 -- # waitforlisten 3561378 00:18:27.726 22:17:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:27.726 22:17:22 -- common/autotest_common.sh@819 -- # '[' -z 3561378 ']' 00:18:27.726 22:17:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:27.726 22:17:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:27.726 22:17:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:27.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:27.726 22:17:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:27.726 22:17:22 -- common/autotest_common.sh@10 -- # set +x 00:18:27.726 [2024-07-24 22:17:22.825470] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:18:27.726 [2024-07-24 22:17:22.825516] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.726 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.986 [2024-07-24 22:17:22.882982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.986 [2024-07-24 22:17:22.920708] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:27.986 [2024-07-24 22:17:22.920823] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.986 [2024-07-24 22:17:22.920832] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:27.986 [2024-07-24 22:17:22.920840] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.986 [2024-07-24 22:17:22.920856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.552 22:17:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:28.552 22:17:23 -- common/autotest_common.sh@852 -- # return 0 00:18:28.552 22:17:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:28.552 22:17:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:28.552 22:17:23 -- common/autotest_common.sh@10 -- # set +x 00:18:28.552 22:17:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.552 22:17:23 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:28.552 22:17:23 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:28.552 22:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.552 22:17:23 -- common/autotest_common.sh@10 -- # set +x 00:18:28.552 [2024-07-24 22:17:23.658070] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.552 22:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.552 22:17:23 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:28.552 22:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.552 22:17:23 -- common/autotest_common.sh@10 -- # set +x 00:18:28.552 22:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.552 22:17:23 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.552 22:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.552 22:17:23 -- common/autotest_common.sh@10 -- # set +x 00:18:28.552 [2024-07-24 22:17:23.674225] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.552 22:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.552 22:17:23 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:28.552 22:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.552 22:17:23 -- common/autotest_common.sh@10 -- # set +x 00:18:28.552 22:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.552 22:17:23 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:28.811 22:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.811 22:17:23 -- common/autotest_common.sh@10 -- # set +x 00:18:28.811 malloc0 00:18:28.811 22:17:23 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:18:28.811 22:17:23 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:28.811 22:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.811 22:17:23 -- common/autotest_common.sh@10 -- # set +x 00:18:28.811 22:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.811 22:17:23 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:28.811 22:17:23 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:28.811 22:17:23 -- nvmf/common.sh@520 -- # config=() 00:18:28.811 22:17:23 -- nvmf/common.sh@520 -- # local subsystem config 00:18:28.811 22:17:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:28.811 22:17:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:28.811 { 00:18:28.811 "params": { 00:18:28.811 "name": "Nvme$subsystem", 00:18:28.811 "trtype": "$TEST_TRANSPORT", 00:18:28.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:28.811 "adrfam": "ipv4", 00:18:28.811 "trsvcid": "$NVMF_PORT", 00:18:28.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:28.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:28.811 "hdgst": ${hdgst:-false}, 00:18:28.811 "ddgst": ${ddgst:-false} 00:18:28.811 }, 00:18:28.811 "method": "bdev_nvme_attach_controller" 00:18:28.811 } 00:18:28.811 EOF 00:18:28.811 )") 00:18:28.811 22:17:23 -- nvmf/common.sh@542 -- # cat 00:18:28.811 22:17:23 -- nvmf/common.sh@544 -- # jq . 00:18:28.811 22:17:23 -- nvmf/common.sh@545 -- # IFS=, 00:18:28.811 22:17:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:28.811 "params": { 00:18:28.811 "name": "Nvme1", 00:18:28.811 "trtype": "tcp", 00:18:28.811 "traddr": "10.0.0.2", 00:18:28.811 "adrfam": "ipv4", 00:18:28.811 "trsvcid": "4420", 00:18:28.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:28.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:28.811 "hdgst": false, 00:18:28.811 "ddgst": false 00:18:28.811 }, 00:18:28.811 "method": "bdev_nvme_attach_controller" 00:18:28.811 }' 00:18:28.811 [2024-07-24 22:17:23.748503] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:28.811 [2024-07-24 22:17:23.748545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3561419 ] 00:18:28.811 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.811 [2024-07-24 22:17:23.802565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.811 [2024-07-24 22:17:23.841511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.070 Running I/O for 10 seconds... 
00:18:41.279 
00:18:41.279                                                                                                Latency(us)
00:18:41.279 Device Information                                                        : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:18:41.279 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:18:41.279 	 Verification LBA range: start 0x0 length 0x1000
00:18:41.279 	 Nvme1n1             :      10.01   12678.72      99.05       0.00     0.00   10072.21    1602.78   56759.87
00:18:41.279 ===================================================================================================================
00:18:41.279 	 Total               :            12678.72      99.05       0.00     0.00   10072.21    1602.78   56759.87
00:18:41.279 22:17:34 -- target/zcopy.sh@39 -- # perfpid=3563280 00:18:41.279 22:17:34 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:41.279 22:17:34 -- common/autotest_common.sh@10 -- # set +x 00:18:41.279 22:17:34 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:41.279 22:17:34 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:41.279 22:17:34 -- nvmf/common.sh@520 -- # config=() 00:18:41.279 22:17:34 -- nvmf/common.sh@520 -- # local subsystem config 00:18:41.279 [2024-07-24 22:17:34.365435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.279 [2024-07-24 22:17:34.365474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.279 22:17:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:41.279 22:17:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:41.279 { 00:18:41.279 "params": { 00:18:41.279 "name": "Nvme$subsystem", 00:18:41.279 "trtype": "$TEST_TRANSPORT", 00:18:41.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:41.279 "adrfam": "ipv4", 00:18:41.279 "trsvcid": "$NVMF_PORT", 00:18:41.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:41.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:41.279 "hdgst": ${hdgst:-false}, 00:18:41.279 "ddgst": ${ddgst:-false} 00:18:41.279 }, 00:18:41.279 "method": "bdev_nvme_attach_controller" 00:18:41.279 } 00:18:41.279 EOF 00:18:41.279 )") 00:18:41.279 22:17:34 -- nvmf/common.sh@542 -- # cat 00:18:41.279 22:17:34 -- nvmf/common.sh@544 -- # jq . 
00:18:41.279 [2024-07-24 22:17:34.373418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.279 [2024-07-24 22:17:34.373432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.279 22:17:34 -- nvmf/common.sh@545 -- # IFS=, 00:18:41.279 22:17:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:41.279 "params": { 00:18:41.279 "name": "Nvme1", 00:18:41.279 "trtype": "tcp", 00:18:41.279 "traddr": "10.0.0.2", 00:18:41.279 "adrfam": "ipv4", 00:18:41.279 "trsvcid": "4420", 00:18:41.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.279 "hdgst": false, 00:18:41.279 "ddgst": false 00:18:41.279 }, 00:18:41.279 "method": "bdev_nvme_attach_controller" 00:18:41.279 }' 00:18:41.279 [2024-07-24 22:17:34.381436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.279 [2024-07-24 22:17:34.381447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.279 [2024-07-24 22:17:34.389456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.279 [2024-07-24 22:17:34.389466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.279 [2024-07-24 22:17:34.397478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.279 [2024-07-24 22:17:34.397487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.279 [2024-07-24 22:17:34.405445] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:41.279 [2024-07-24 22:17:34.405486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3563280 ] 00:18:41.279 [2024-07-24 22:17:34.405501] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.279 [2024-07-24 22:17:34.405512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.279 [2024-07-24 22:17:34.413520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.279 [2024-07-24 22:17:34.413530] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.279 [2024-07-24 22:17:34.421542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.279 [2024-07-24 22:17:34.421551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.279 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.279 [2024-07-24 22:17:34.429565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.279 [2024-07-24 22:17:34.429574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.279 [2024-07-24 22:17:34.437588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.279 [2024-07-24 22:17:34.437597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.279 [2024-07-24 22:17:34.445610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.445620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.453632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.453641] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.459512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.280 [2024-07-24 22:17:34.461654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.461664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.469678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.469689] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.477699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.477709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.485724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.485745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.493741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.493751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.498548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.280 [2024-07-24 22:17:34.501763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.501773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.509793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.509816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.517813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.517830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.525832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.525843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.533852] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.533864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.541872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.541884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.549893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.549904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.557920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.557931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.565941] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.565952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.573979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.574000] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.581990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.582004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.590011] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.590024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.598033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.598051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.606059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.606072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.614083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.614097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.622098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.622108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.630118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.630129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.638139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.638149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.646164] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.646173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.654188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.654202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.662206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.662216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.670228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.670237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.678249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.678259] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.686272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.686282] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.694293] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.694314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.702328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.702341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.710347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.710356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.718366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.718375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.726388] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.726397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.734409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.734419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.742431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.742441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.750453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.750463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.758481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.758498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.766497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.766507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 Running I/O for 5 seconds... 
00:18:41.280 [2024-07-24 22:17:34.791145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.791165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.801199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.801218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.810150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.810169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.818714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.818733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.827866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.827884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.836266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.836285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.844999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.845018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.855509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.855527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.866679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.866697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.875607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.875625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.885563] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.885581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.894456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.280 [2024-07-24 22:17:34.894474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.280 [2024-07-24 22:17:34.902771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.281 [2024-07-24 22:17:34.902789] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.281 [2024-07-24 22:17:34.911178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.281 [2024-07-24 22:17:34.911196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.281 [2024-07-24 22:17:34.920313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.281 
[2024-07-24 22:17:34.920332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:41.281 [2024-07-24 22:17:34.929110] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:41.281 [2024-07-24 22:17:34.929128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message pair — "subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" followed by "nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" — repeats continuously between 22:17:34 and 22:17:37 (Jenkins timestamps 00:18:41.281 through 00:18:42.584); several hundred duplicate entries condensed here ...]
00:18:42.584 [2024-07-24 22:17:37.645958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.584 [2024-07-24 22:17:37.645976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.584 [2024-07-24 22:17:37.654422] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.584 [2024-07-24 22:17:37.654440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.584 [2024-07-24 22:17:37.663796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.584 [2024-07-24 22:17:37.663814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.584 [2024-07-24 22:17:37.671995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.584 [2024-07-24 22:17:37.672013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.584 [2024-07-24 22:17:37.680921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.584 [2024-07-24 22:17:37.680940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.584 [2024-07-24 22:17:37.689819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.584 [2024-07-24 22:17:37.689837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.584 [2024-07-24 22:17:37.698232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.584 [2024-07-24 22:17:37.698250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.584 [2024-07-24 22:17:37.706992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.584 [2024-07-24 22:17:37.707014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.584 [2024-07-24 22:17:37.715991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.584 [2024-07-24 22:17:37.716012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.724517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.724537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.733679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.733697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.742823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.742841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.749702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.749721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.760612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.760631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.769240] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.769259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.778370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.778389] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.787316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.787334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.796169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.796187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.804753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.804771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.813020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.813038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.821405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.821423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.830311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.830330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.839067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.839086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.847524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.847542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.855882] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.855901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.864651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.864671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.873101] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.873124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.881516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.881535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.890187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.890206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.899267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.899285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.908060] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.908078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.917041] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.917066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.925279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.925298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.934211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.934230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.943145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.943163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.951697] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.951716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.960888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.960907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:42.844 [2024-07-24 22:17:37.969348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:42.844 [2024-07-24 22:17:37.969367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:37.977738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:37.977759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:37.986940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:37.986961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:37.995896] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:37.995915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.004938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.004956] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.013457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.013475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.022346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.022364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.029866] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.029884] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.039051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.039094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.048142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.048163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.057171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.057189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.066437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.066456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.073513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.073531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.083516] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.083534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.091911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.091929] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.100161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.100180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.108954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.108974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.117805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.117823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.126140] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.126158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.134607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.134625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.143471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.143489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.152373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.152392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.161394] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.161412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.169712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.169731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.177700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.177719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.186716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.186734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.195174] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.195192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.203743] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.203766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.213099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.213118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.222425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.222444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.104 [2024-07-24 22:17:38.231397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.104 [2024-07-24 22:17:38.231415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.240403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.240423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.249135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.249155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.257738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.257757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.266199] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.266227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.275176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.275195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.286801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.286819] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.296224] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.296242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.303564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.303581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.312954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.312973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.321724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.321743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.330641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.330660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.337518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.337536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.348005] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.348024] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.356298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.356316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.371103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.371121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.380723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.380745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.389755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.389774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.396457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.396475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.407088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.407106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.421006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.421025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.430735] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.430753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.438617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.438636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.448636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.448654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.460346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.460364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.470701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.470719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.483298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.483316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.364 [2024-07-24 22:17:38.494565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.364 [2024-07-24 22:17:38.494584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.503058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.503078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.513017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.513036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.522191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.522210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.530692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.530711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.537947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.537965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.548192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.548210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.557258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.557276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.566186] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.566204] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.574610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.574628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.583100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.583118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.589792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.589809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.599989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.600007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.608368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.608386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.617687] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.617706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.626353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.626371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.635364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.635382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.644277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.644295] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.650911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.650928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.660957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.660975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.672898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.672916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.681264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.681283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.688995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.689013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.704507] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.704527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.713821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.713840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.721221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.721239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.731958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.731976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.742077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.624 [2024-07-24 22:17:38.742095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.624 [2024-07-24 22:17:38.750762] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.625 [2024-07-24 22:17:38.750780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.758032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.758060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.767889] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.767908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.776700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.776719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.785069] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.785087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.797562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.797581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.807936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.807954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.815941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.815960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.823624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.823642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.833765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.833783] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.842739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.842757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.849933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.849951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.860969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.860988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.867981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.867999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.879040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.879064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.887758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.887776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.895941] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.895959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.905472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.905490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.914324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.914341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.922758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.922776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.931736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.931755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.938645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.938664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.948931] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.948950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.956802] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.956820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.967825] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.967843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.979087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.979106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.987940] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.987958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:38.997577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:38.997595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.884 [2024-07-24 22:17:39.007015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.884 [2024-07-24 22:17:39.007033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.143 [2024-07-24 22:17:39.021348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.143 [2024-07-24 22:17:39.021368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.143 [2024-07-24 22:17:39.030373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.143 [2024-07-24 22:17:39.030392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.143 [2024-07-24 22:17:39.039065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.143 [2024-07-24 22:17:39.039100] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.143 [2024-07-24 22:17:39.048229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.143 [2024-07-24 22:17:39.048247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.143 [2024-07-24 22:17:39.057219] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.143 [2024-07-24 22:17:39.057236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.143 [2024-07-24 22:17:39.066054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.143 [2024-07-24 22:17:39.066088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.143 [2024-07-24 22:17:39.074949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.143 [2024-07-24 22:17:39.074967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.143 [2024-07-24 22:17:39.083693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.143 [2024-07-24 22:17:39.083715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.143 [2024-07-24 22:17:39.092678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.092697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.101543] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.101561] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.110369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.110387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.119367] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.119386] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.128284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.128312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.137584] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.137602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.146642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.146660] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.160724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.160742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.169472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.169490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.177282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.177301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.185481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.185500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.194464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.194484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.208269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.208288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.215048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.215067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.225327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.225347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.234396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.234415] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.243047] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.243066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.257287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.257307] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.263976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.263999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.144 [2024-07-24 22:17:39.274058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.144 [2024-07-24 22:17:39.274079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-24 22:17:39.282517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-24 22:17:39.282538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-24 22:17:39.290792] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-24 22:17:39.290812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-24 22:17:39.299360] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-24 22:17:39.299380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-24 22:17:39.308428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-24 22:17:39.308447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-24 22:17:39.315464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-24 22:17:39.315483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-24 22:17:39.326111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-24 22:17:39.326130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-24 22:17:39.334285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-24 22:17:39.334304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-24 22:17:39.342653] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-24 22:17:39.342671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-24 22:17:39.351416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-24 22:17:39.351434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-24 22:17:39.359881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-24 22:17:39.359898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-24 22:17:39.368602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-24 22:17:39.368620] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.404 [2024-07-24 22:17:39.377287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.404 [2024-07-24 22:17:39.377313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace [... the same pair of errors repeats for every nvmf_subsystem_add_ns retry between 22:17:39.377 and 22:17:39.750, each attempt rejected because NSID 1 is already in use ...] 00:18:44.664 [2024-07-24 22:17:39.756792]
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.664 [2024-07-24 22:17:39.756810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.664 [2024-07-24 22:17:39.766739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.664 [2024-07-24 22:17:39.766761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.664 [2024-07-24 22:17:39.775139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.664 [2024-07-24 22:17:39.775158] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.664 [2024-07-24 22:17:39.782132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.664 [2024-07-24 22:17:39.782150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.664 00:18:44.664 Latency(us) 00:18:44.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.664 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:44.664 Nvme1n1 : 5.01 16245.62 126.92 0.00 0.00 7871.46 1852.10 52428.80 00:18:44.664 =================================================================================================================== 00:18:44.664 Total : 16245.62 126.92 0.00 0.00 7871.46 1852.10 52428.80 00:18:44.664 [2024-07-24 22:17:39.789419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.664 [2024-07-24 22:17:39.789436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.665 [2024-07-24 22:17:39.797452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.665 [2024-07-24 22:17:39.797475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.805468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.805485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.813490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.813503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.821512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.821526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.829530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.829544] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.837554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.837566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.845571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.845581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.853594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.853604] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.861615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.861625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.869636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.869646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.877656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.877666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.885680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.885690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.893700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.893709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.901725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.901738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.909746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.909757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.917763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.917773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.925783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.925792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.933807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.933817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.941829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.941839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.949848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.949857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 [2024-07-24 22:17:39.957871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.924 [2024-07-24 22:17:39.957880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3563280) - No such process 00:18:44.924 22:17:39 -- target/zcopy.sh@49 -- # wait 3563280 00:18:44.924 22:17:39 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:44.924 
22:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.924 22:17:39 -- common/autotest_common.sh@10 -- # set +x 00:18:44.924 22:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.924 22:17:39 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:44.924 22:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.924 22:17:39 -- common/autotest_common.sh@10 -- # set +x 00:18:44.924 delay0 00:18:44.924 22:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.924 22:17:39 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:44.924 22:17:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:44.924 22:17:39 -- common/autotest_common.sh@10 -- # set +x 00:18:44.924 22:17:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:44.924 22:17:39 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:44.924 EAL: No free 2048 kB hugepages reported on node 1 00:18:45.183 [2024-07-24 22:17:40.083343] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:51.747 Initializing NVMe Controllers 00:18:51.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:51.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:51.747 Initialization complete. Launching workers. 00:18:51.747 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 108 00:18:51.747 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 385, failed to submit 43 00:18:51.747 success 210, unsuccess 175, failed 0 00:18:51.747 22:17:46 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:51.747 22:17:46 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:51.747 22:17:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:51.747 22:17:46 -- nvmf/common.sh@116 -- # sync 00:18:51.747 22:17:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:51.747 22:17:46 -- nvmf/common.sh@119 -- # set +e 00:18:51.747 22:17:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:51.747 22:17:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:51.747 rmmod nvme_tcp 00:18:51.747 rmmod nvme_fabrics 00:18:51.747 rmmod nvme_keyring 00:18:51.747 22:17:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:51.747 22:17:46 -- nvmf/common.sh@123 -- # set -e 00:18:51.747 22:17:46 -- nvmf/common.sh@124 -- # return 0 00:18:51.747 22:17:46 -- nvmf/common.sh@477 -- # '[' -n 3561378 ']' 00:18:51.747 22:17:46 -- nvmf/common.sh@478 -- # killprocess 3561378 00:18:51.747 22:17:46 -- common/autotest_common.sh@926 -- # '[' -z 3561378 ']' 00:18:51.747 22:17:46 -- common/autotest_common.sh@930 -- # kill -0 3561378 00:18:51.747 22:17:46 -- common/autotest_common.sh@931 -- # uname 00:18:51.747 22:17:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:51.747 22:17:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3561378 00:18:51.747 22:17:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:51.747 22:17:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:51.747 22:17:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3561378' 
00:18:51.747 killing process with pid 3561378 00:18:51.747 22:17:46 -- common/autotest_common.sh@945 -- # kill 3561378 00:18:51.747 22:17:46 -- common/autotest_common.sh@950 -- # wait 3561378 00:18:51.747 22:17:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:51.747 22:17:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:51.747 22:17:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:51.747 22:17:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.747 22:17:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:51.747 22:17:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.747 22:17:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:51.747 22:17:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.655 22:17:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:53.655 00:18:53.655 real 0m31.442s 00:18:53.655 user 0m43.235s 00:18:53.655 sys 0m10.119s 00:18:53.655 22:17:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.655 22:17:48 -- common/autotest_common.sh@10 -- # set +x 00:18:53.655 ************************************ 00:18:53.655 END TEST nvmf_zcopy 00:18:53.655 ************************************ 00:18:53.655 22:17:48 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:53.655 22:17:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:53.655 22:17:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:53.655 22:17:48 -- common/autotest_common.sh@10 -- # set +x 00:18:53.655 ************************************ 00:18:53.655 START TEST nvmf_nmic 00:18:53.655 ************************************ 00:18:53.655 22:17:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:53.975 * Looking for test storage... 
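The nvmf_zcopy run that ends above finishes by swapping the subsystem namespace for a delay bdev and then driving it with the abort example. As a minimal sketch, the rpc_cmd trace corresponds to the following standalone invocations (rpc_cmd is the test wrapper around scripts/rpc.py; the default /var/tmp/spdk.sock RPC socket and workspace-relative paths are assumed, so this is an illustration of the traced calls rather than an excerpt from zcopy.sh):

  # drop the existing namespace 1 from the subsystem
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # wrap malloc0 in a delay bdev, reusing the latency arguments from the trace
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # expose the delay bdev as namespace 1 again
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # issue aborts against the slow namespace over NVMe/TCP for 5 seconds
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort summary above (320 I/O completed, 385 aborts submitted, 210 successful) is the expected shape of the result for this test: the delay bdev keeps commands outstanding long enough that many of them can be aborted before they complete.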
00:18:53.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:53.975 22:17:48 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.975 22:17:48 -- nvmf/common.sh@7 -- # uname -s 00:18:53.975 22:17:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.975 22:17:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.975 22:17:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.975 22:17:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.975 22:17:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.975 22:17:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.975 22:17:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.975 22:17:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.975 22:17:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.975 22:17:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.975 22:17:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:53.975 22:17:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:53.975 22:17:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.975 22:17:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.975 22:17:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.975 22:17:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.975 22:17:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.975 22:17:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.975 22:17:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.975 22:17:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.975 22:17:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.975 22:17:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.975 22:17:48 -- paths/export.sh@5 -- # export PATH 00:18:53.975 22:17:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.975 22:17:48 -- nvmf/common.sh@46 -- # : 0 00:18:53.975 22:17:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:53.975 22:17:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:53.975 22:17:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:53.975 22:17:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.975 22:17:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.975 22:17:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:53.975 22:17:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:53.975 22:17:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:53.975 22:17:48 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:53.975 22:17:48 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:53.975 22:17:48 -- target/nmic.sh@14 -- # nvmftestinit 00:18:53.975 22:17:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:53.975 22:17:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.975 22:17:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:53.975 22:17:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:53.975 22:17:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:53.975 22:17:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.975 22:17:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:53.975 22:17:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.975 22:17:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:53.975 22:17:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:53.975 22:17:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:53.975 22:17:48 -- common/autotest_common.sh@10 -- # set +x 00:18:59.249 22:17:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:59.249 22:17:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:59.249 22:17:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:59.249 22:17:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:59.249 22:17:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:59.249 22:17:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:59.249 22:17:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:59.249 22:17:54 -- nvmf/common.sh@294 -- # net_devs=() 00:18:59.249 22:17:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:59.249 22:17:54 -- nvmf/common.sh@295 -- # 
e810=() 00:18:59.249 22:17:54 -- nvmf/common.sh@295 -- # local -ga e810 00:18:59.249 22:17:54 -- nvmf/common.sh@296 -- # x722=() 00:18:59.249 22:17:54 -- nvmf/common.sh@296 -- # local -ga x722 00:18:59.249 22:17:54 -- nvmf/common.sh@297 -- # mlx=() 00:18:59.249 22:17:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:59.249 22:17:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.249 22:17:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.249 22:17:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.249 22:17:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.249 22:17:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.249 22:17:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.249 22:17:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.249 22:17:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.249 22:17:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.249 22:17:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.249 22:17:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.249 22:17:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:59.249 22:17:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:59.249 22:17:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:59.249 22:17:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:59.249 22:17:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:59.249 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:59.249 22:17:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:59.249 22:17:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:59.249 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:59.249 22:17:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:59.249 22:17:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:59.249 22:17:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.249 22:17:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:59.249 22:17:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.249 22:17:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:59.249 Found net 
devices under 0000:86:00.0: cvl_0_0 00:18:59.249 22:17:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.249 22:17:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:59.249 22:17:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.249 22:17:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:59.249 22:17:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.249 22:17:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:59.249 Found net devices under 0000:86:00.1: cvl_0_1 00:18:59.249 22:17:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.249 22:17:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:59.249 22:17:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:59.249 22:17:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:59.249 22:17:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:59.249 22:17:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:59.249 22:17:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.249 22:17:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:59.249 22:17:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:59.249 22:17:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:59.249 22:17:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:59.249 22:17:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:59.249 22:17:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:59.249 22:17:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:59.249 22:17:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:59.249 22:17:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:59.249 22:17:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:59.249 22:17:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:59.249 22:17:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:59.249 22:17:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:59.249 22:17:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:59.249 22:17:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:59.249 22:17:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:59.249 22:17:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:59.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:59.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:18:59.249 00:18:59.249 --- 10.0.0.2 ping statistics --- 00:18:59.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.249 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:18:59.249 22:17:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:59.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:59.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:18:59.249 00:18:59.249 --- 10.0.0.1 ping statistics --- 00:18:59.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:59.249 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:18:59.249 22:17:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:59.249 22:17:54 -- nvmf/common.sh@410 -- # return 0 00:18:59.249 22:17:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:59.249 22:17:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:59.249 22:17:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:59.249 22:17:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:59.249 22:17:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:59.249 22:17:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:59.249 22:17:54 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:59.249 22:17:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:59.249 22:17:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:59.249 22:17:54 -- common/autotest_common.sh@10 -- # set +x 00:18:59.249 22:17:54 -- nvmf/common.sh@469 -- # nvmfpid=3568686 00:18:59.249 22:17:54 -- nvmf/common.sh@470 -- # waitforlisten 3568686 00:18:59.249 22:17:54 -- common/autotest_common.sh@819 -- # '[' -z 3568686 ']' 00:18:59.249 22:17:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.249 22:17:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:59.249 22:17:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.250 22:17:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:59.250 22:17:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:59.250 22:17:54 -- common/autotest_common.sh@10 -- # set +x 00:18:59.508 [2024-07-24 22:17:54.427149] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:18:59.508 [2024-07-24 22:17:54.427191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.508 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.508 [2024-07-24 22:17:54.485525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.508 [2024-07-24 22:17:54.526048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:59.508 [2024-07-24 22:17:54.526166] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.508 [2024-07-24 22:17:54.526174] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.508 [2024-07-24 22:17:54.526180] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
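Before the nvmf_nmic target comes up, nvmf_tcp_init (traced above) turns the two physical ports into a namespace loopback: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the target application is then launched inside the namespace. A condensed sketch of those steps with the interface names from this run (a simplified reconstruction, not the full nvmf/common.sh logic):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept inbound TCP on the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator reachability check
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # start the target inside the namespace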
00:18:59.508 [2024-07-24 22:17:54.526226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.508 [2024-07-24 22:17:54.526324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.508 [2024-07-24 22:17:54.526412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:59.508 [2024-07-24 22:17:54.526414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.445 22:17:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:00.445 22:17:55 -- common/autotest_common.sh@852 -- # return 0 00:19:00.445 22:17:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:00.445 22:17:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:00.445 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:19:00.445 22:17:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.445 22:17:55 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:00.445 22:17:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.445 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:19:00.445 [2024-07-24 22:17:55.271502] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.445 22:17:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.445 22:17:55 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:00.445 22:17:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.445 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:19:00.445 Malloc0 00:19:00.445 22:17:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.445 22:17:55 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:00.445 22:17:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.445 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:19:00.445 22:17:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.445 22:17:55 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:00.445 22:17:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.445 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:19:00.445 22:17:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.445 22:17:55 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:00.445 22:17:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.445 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:19:00.445 [2024-07-24 22:17:55.323135] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.445 22:17:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.445 22:17:55 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:00.445 test case1: single bdev can't be used in multiple subsystems 00:19:00.445 22:17:55 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:00.445 22:17:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.445 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:19:00.445 22:17:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.445 22:17:55 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:00.445 22:17:55 -- common/autotest_common.sh@551 -- # xtrace_disable 
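The target-side setup for nvmf_nmic traced here creates the TCP transport, a 64 MB Malloc0 bdev, and subsystem cnode1 with Malloc0 as namespace 1 and a listener on 4420. Test case1 then builds a second subsystem that tries to claim the same bdev, which the target rejects in the log that follows because Malloc0 is already claimed by cnode1. As a sketch, the rpc_cmd calls map to this rpc.py sequence (standalone invocations against the default RPC socket are assumed; the test drives them through its rpc_cmd wrapper):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # test case1: a second subsystem cannot claim a bdev that cnode1 already owns
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: bdev already claimed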
00:19:00.445 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:19:00.445 22:17:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.445 22:17:55 -- target/nmic.sh@28 -- # nmic_status=0 00:19:00.445 22:17:55 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:00.445 22:17:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.445 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:19:00.445 [2024-07-24 22:17:55.347045] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:00.445 [2024-07-24 22:17:55.347065] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:00.445 [2024-07-24 22:17:55.347072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.445 request: 00:19:00.445 { 00:19:00.445 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:00.445 "namespace": { 00:19:00.445 "bdev_name": "Malloc0" 00:19:00.445 }, 00:19:00.445 "method": "nvmf_subsystem_add_ns", 00:19:00.445 "req_id": 1 00:19:00.445 } 00:19:00.445 Got JSON-RPC error response 00:19:00.445 response: 00:19:00.445 { 00:19:00.445 "code": -32602, 00:19:00.445 "message": "Invalid parameters" 00:19:00.445 } 00:19:00.445 22:17:55 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:00.445 22:17:55 -- target/nmic.sh@29 -- # nmic_status=1 00:19:00.445 22:17:55 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:00.445 22:17:55 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:00.445 Adding namespace failed - expected result. 00:19:00.445 22:17:55 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:00.445 test case2: host connect to nvmf target in multiple paths 00:19:00.445 22:17:55 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:00.445 22:17:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:00.445 22:17:55 -- common/autotest_common.sh@10 -- # set +x 00:19:00.445 [2024-07-24 22:17:55.359172] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:00.445 22:17:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:00.445 22:17:55 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:01.821 22:17:56 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:02.757 22:17:57 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:02.757 22:17:57 -- common/autotest_common.sh@1177 -- # local i=0 00:19:02.757 22:17:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.757 22:17:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:02.757 22:17:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:04.658 22:17:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:04.658 22:17:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:04.658 22:17:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:04.658 22:17:59 -- common/autotest_common.sh@1186 -- # 
nvme_devices=1 00:19:04.658 22:17:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.658 22:17:59 -- common/autotest_common.sh@1187 -- # return 0 00:19:04.658 22:17:59 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:04.659 [global] 00:19:04.659 thread=1 00:19:04.659 invalidate=1 00:19:04.659 rw=write 00:19:04.659 time_based=1 00:19:04.659 runtime=1 00:19:04.659 ioengine=libaio 00:19:04.659 direct=1 00:19:04.659 bs=4096 00:19:04.659 iodepth=1 00:19:04.659 norandommap=0 00:19:04.659 numjobs=1 00:19:04.659 00:19:04.659 verify_dump=1 00:19:04.659 verify_backlog=512 00:19:04.659 verify_state_save=0 00:19:04.659 do_verify=1 00:19:04.659 verify=crc32c-intel 00:19:04.659 [job0] 00:19:04.659 filename=/dev/nvme0n1 00:19:04.916 Could not set queue depth (nvme0n1) 00:19:04.916 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.916 fio-3.35 00:19:04.916 Starting 1 thread 00:19:06.291 00:19:06.291 job0: (groupid=0, jobs=1): err= 0: pid=3569787: Wed Jul 24 22:18:01 2024 00:19:06.291 read: IOPS=1105, BW=4424KiB/s (4530kB/s)(4428KiB/1001msec) 00:19:06.291 slat (nsec): min=6123, max=27123, avg=7075.18, stdev=990.14 00:19:06.291 clat (usec): min=309, max=1042, avg=466.11, stdev=61.05 00:19:06.291 lat (usec): min=316, max=1050, avg=473.18, stdev=61.10 00:19:06.291 clat percentiles (usec): 00:19:06.291 | 1.00th=[ 343], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 441], 00:19:06.291 | 30.00th=[ 449], 40.00th=[ 457], 50.00th=[ 461], 60.00th=[ 474], 00:19:06.291 | 70.00th=[ 498], 80.00th=[ 515], 90.00th=[ 529], 95.00th=[ 529], 00:19:06.291 | 99.00th=[ 676], 99.50th=[ 775], 99.90th=[ 832], 99.95th=[ 1045], 00:19:06.291 | 99.99th=[ 1045] 00:19:06.291 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:06.291 slat (nsec): min=8760, max=74316, avg=10371.90, stdev=3421.69 00:19:06.291 clat (usec): min=220, max=946, avg=296.00, stdev=98.07 00:19:06.291 lat (usec): min=229, max=1020, avg=306.37, stdev=100.11 00:19:06.291 clat percentiles (usec): 00:19:06.291 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 229], 20.00th=[ 233], 00:19:06.291 | 30.00th=[ 239], 40.00th=[ 251], 50.00th=[ 269], 60.00th=[ 281], 00:19:06.291 | 70.00th=[ 297], 80.00th=[ 343], 90.00th=[ 392], 95.00th=[ 449], 00:19:06.291 | 99.00th=[ 717], 99.50th=[ 775], 99.90th=[ 832], 99.95th=[ 947], 00:19:06.291 | 99.99th=[ 947] 00:19:06.291 bw ( KiB/s): min= 5768, max= 5768, per=93.97%, avg=5768.00, stdev= 0.00, samples=1 00:19:06.291 iops : min= 1442, max= 1442, avg=1442.00, stdev= 0.00, samples=1 00:19:06.291 lat (usec) : 250=23.04%, 500=63.07%, 750=13.32%, 1000=0.53% 00:19:06.291 lat (msec) : 2=0.04% 00:19:06.291 cpu : usr=1.50%, sys=2.20%, ctx=2643, majf=0, minf=2 00:19:06.291 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:06.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.291 issued rwts: total=1107,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.291 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:06.291 00:19:06.291 Run status group 0 (all jobs): 00:19:06.291 READ: bw=4424KiB/s (4530kB/s), 4424KiB/s-4424KiB/s (4530kB/s-4530kB/s), io=4428KiB (4534kB), run=1001-1001msec 00:19:06.291 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), 
run=1001-1001msec 00:19:06.291 00:19:06.291 Disk stats (read/write): 00:19:06.291 nvme0n1: ios=1074/1281, merge=0/0, ticks=534/370, in_queue=904, util=92.28% 00:19:06.291 22:18:01 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:06.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:06.291 22:18:01 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:06.291 22:18:01 -- common/autotest_common.sh@1198 -- # local i=0 00:19:06.291 22:18:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:06.291 22:18:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:06.291 22:18:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:06.291 22:18:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:06.291 22:18:01 -- common/autotest_common.sh@1210 -- # return 0 00:19:06.291 22:18:01 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:06.291 22:18:01 -- target/nmic.sh@53 -- # nvmftestfini 00:19:06.291 22:18:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:06.291 22:18:01 -- nvmf/common.sh@116 -- # sync 00:19:06.291 22:18:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:06.291 22:18:01 -- nvmf/common.sh@119 -- # set +e 00:19:06.291 22:18:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:06.291 22:18:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:06.291 rmmod nvme_tcp 00:19:06.291 rmmod nvme_fabrics 00:19:06.291 rmmod nvme_keyring 00:19:06.291 22:18:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:06.291 22:18:01 -- nvmf/common.sh@123 -- # set -e 00:19:06.291 22:18:01 -- nvmf/common.sh@124 -- # return 0 00:19:06.291 22:18:01 -- nvmf/common.sh@477 -- # '[' -n 3568686 ']' 00:19:06.291 22:18:01 -- nvmf/common.sh@478 -- # killprocess 3568686 00:19:06.291 22:18:01 -- common/autotest_common.sh@926 -- # '[' -z 3568686 ']' 00:19:06.292 22:18:01 -- common/autotest_common.sh@930 -- # kill -0 3568686 00:19:06.292 22:18:01 -- common/autotest_common.sh@931 -- # uname 00:19:06.292 22:18:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:06.292 22:18:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3568686 00:19:06.550 22:18:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:06.550 22:18:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:06.550 22:18:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3568686' 00:19:06.550 killing process with pid 3568686 00:19:06.550 22:18:01 -- common/autotest_common.sh@945 -- # kill 3568686 00:19:06.550 22:18:01 -- common/autotest_common.sh@950 -- # wait 3568686 00:19:06.550 22:18:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:06.550 22:18:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:06.550 22:18:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:06.550 22:18:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:06.550 22:18:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:06.550 22:18:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.550 22:18:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:06.550 22:18:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.080 22:18:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:09.080 00:19:09.080 real 0m14.908s 00:19:09.080 user 0m35.024s 00:19:09.080 sys 0m4.869s 00:19:09.080 22:18:03 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:19:09.080 22:18:03 -- common/autotest_common.sh@10 -- # set +x 00:19:09.080 ************************************ 00:19:09.080 END TEST nvmf_nmic 00:19:09.080 ************************************ 00:19:09.080 22:18:03 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:09.080 22:18:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:09.080 22:18:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:09.080 22:18:03 -- common/autotest_common.sh@10 -- # set +x 00:19:09.080 ************************************ 00:19:09.080 START TEST nvmf_fio_target 00:19:09.080 ************************************ 00:19:09.080 22:18:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:09.080 * Looking for test storage... 00:19:09.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:09.080 22:18:03 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.080 22:18:03 -- nvmf/common.sh@7 -- # uname -s 00:19:09.080 22:18:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.080 22:18:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.080 22:18:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.080 22:18:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.080 22:18:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.080 22:18:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.080 22:18:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.080 22:18:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.080 22:18:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.080 22:18:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.080 22:18:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:09.080 22:18:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:09.080 22:18:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.080 22:18:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.080 22:18:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.080 22:18:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.080 22:18:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.080 22:18:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.080 22:18:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.080 22:18:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.080 22:18:03 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.081 22:18:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.081 22:18:03 -- paths/export.sh@5 -- # export PATH 00:19:09.081 22:18:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.081 22:18:03 -- nvmf/common.sh@46 -- # : 0 00:19:09.081 22:18:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:09.081 22:18:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:09.081 22:18:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:09.081 22:18:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.081 22:18:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.081 22:18:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:09.081 22:18:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:09.081 22:18:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:09.081 22:18:03 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:09.081 22:18:03 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:09.081 22:18:03 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:09.081 22:18:03 -- target/fio.sh@16 -- # nvmftestinit 00:19:09.081 22:18:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:09.081 22:18:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.081 22:18:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:09.081 22:18:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:09.081 22:18:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:09.081 22:18:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.081 22:18:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.081 22:18:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.081 22:18:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:09.081 22:18:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:09.081 22:18:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:09.081 22:18:03 -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.348 22:18:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:14.348 22:18:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:14.348 22:18:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:14.348 22:18:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:14.348 22:18:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:14.348 22:18:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:14.348 22:18:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:14.348 22:18:08 -- nvmf/common.sh@294 -- # net_devs=() 00:19:14.348 22:18:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:14.348 22:18:08 -- nvmf/common.sh@295 -- # e810=() 00:19:14.348 22:18:08 -- nvmf/common.sh@295 -- # local -ga e810 00:19:14.348 22:18:08 -- nvmf/common.sh@296 -- # x722=() 00:19:14.348 22:18:08 -- nvmf/common.sh@296 -- # local -ga x722 00:19:14.348 22:18:08 -- nvmf/common.sh@297 -- # mlx=() 00:19:14.348 22:18:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:14.348 22:18:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.348 22:18:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.348 22:18:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.348 22:18:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.348 22:18:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.348 22:18:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.348 22:18:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.348 22:18:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.348 22:18:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.348 22:18:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.348 22:18:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.348 22:18:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:14.348 22:18:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:14.348 22:18:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:14.348 22:18:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:14.348 22:18:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:14.348 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:14.348 22:18:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:14.348 22:18:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:14.348 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:14.348 22:18:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:19:14.348 22:18:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:14.348 22:18:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:14.348 22:18:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.348 22:18:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:14.348 22:18:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.348 22:18:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:14.348 Found net devices under 0000:86:00.0: cvl_0_0 00:19:14.348 22:18:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.348 22:18:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:14.348 22:18:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.348 22:18:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:14.348 22:18:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.348 22:18:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:14.348 Found net devices under 0000:86:00.1: cvl_0_1 00:19:14.348 22:18:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.348 22:18:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:14.348 22:18:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:14.348 22:18:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:14.348 22:18:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:14.348 22:18:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:14.348 22:18:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:14.348 22:18:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.348 22:18:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:14.348 22:18:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:14.348 22:18:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:14.348 22:18:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:14.348 22:18:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:14.348 22:18:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:14.348 22:18:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:14.348 22:18:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:14.348 22:18:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:14.348 22:18:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.349 22:18:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.349 22:18:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.349 22:18:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:14.349 22:18:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.349 22:18:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.349 22:18:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.349 22:18:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:14.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:14.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:19:14.349 00:19:14.349 --- 10.0.0.2 ping statistics --- 00:19:14.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.349 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:19:14.349 22:18:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:19:14.349 00:19:14.349 --- 10.0.0.1 ping statistics --- 00:19:14.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.349 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:19:14.349 22:18:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.349 22:18:09 -- nvmf/common.sh@410 -- # return 0 00:19:14.349 22:18:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:14.349 22:18:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.349 22:18:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:14.349 22:18:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:14.349 22:18:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.349 22:18:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:14.349 22:18:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:14.349 22:18:09 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:14.349 22:18:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:14.349 22:18:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:14.349 22:18:09 -- common/autotest_common.sh@10 -- # set +x 00:19:14.349 22:18:09 -- nvmf/common.sh@469 -- # nvmfpid=3573946 00:19:14.349 22:18:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:14.349 22:18:09 -- nvmf/common.sh@470 -- # waitforlisten 3573946 00:19:14.349 22:18:09 -- common/autotest_common.sh@819 -- # '[' -z 3573946 ']' 00:19:14.349 22:18:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.349 22:18:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:14.349 22:18:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.349 22:18:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:14.349 22:18:09 -- common/autotest_common.sh@10 -- # set +x 00:19:14.349 [2024-07-24 22:18:09.299495] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:14.349 [2024-07-24 22:18:09.299538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.349 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.349 [2024-07-24 22:18:09.359070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:14.349 [2024-07-24 22:18:09.399454] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:14.349 [2024-07-24 22:18:09.399563] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.349 [2024-07-24 22:18:09.399571] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:14.349 [2024-07-24 22:18:09.399578] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:14.349 [2024-07-24 22:18:09.399618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:14.349 [2024-07-24 22:18:09.399716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.349 [2024-07-24 22:18:09.399777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:14.349 [2024-07-24 22:18:09.399779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.285 22:18:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:15.285 22:18:10 -- common/autotest_common.sh@852 -- # return 0 00:19:15.285 22:18:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:15.285 22:18:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:15.285 22:18:10 -- common/autotest_common.sh@10 -- # set +x 00:19:15.285 22:18:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.285 22:18:10 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:15.285 [2024-07-24 22:18:10.303090] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.285 22:18:10 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.545 22:18:10 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:15.545 22:18:10 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.804 22:18:10 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:15.804 22:18:10 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.804 22:18:10 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:15.804 22:18:10 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:16.062 22:18:11 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:16.062 22:18:11 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:16.321 22:18:11 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:16.580 22:18:11 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:16.580 22:18:11 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:16.580 22:18:11 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:16.580 22:18:11 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:16.839 22:18:11 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:16.839 22:18:11 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:17.098 22:18:12 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:17.098 22:18:12 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:17.098 22:18:12 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:17.356 22:18:12 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:17.356 22:18:12 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:17.616 22:18:12 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:17.616 [2024-07-24 22:18:12.705020] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.616 22:18:12 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:17.875 22:18:12 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:18.133 22:18:13 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:19.573 22:18:14 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:19.573 22:18:14 -- common/autotest_common.sh@1177 -- # local i=0 00:19:19.573 22:18:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:19.573 22:18:14 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:19.573 22:18:14 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:19.573 22:18:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:21.486 22:18:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:21.486 22:18:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:21.486 22:18:16 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:21.486 22:18:16 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:21.486 22:18:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:21.486 22:18:16 -- common/autotest_common.sh@1187 -- # return 0 00:19:21.486 22:18:16 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:21.486 [global] 00:19:21.486 thread=1 00:19:21.486 invalidate=1 00:19:21.486 rw=write 00:19:21.486 time_based=1 00:19:21.486 runtime=1 00:19:21.486 ioengine=libaio 00:19:21.486 direct=1 00:19:21.486 bs=4096 00:19:21.486 iodepth=1 00:19:21.486 norandommap=0 00:19:21.486 numjobs=1 00:19:21.486 00:19:21.486 verify_dump=1 00:19:21.486 verify_backlog=512 00:19:21.486 verify_state_save=0 00:19:21.486 do_verify=1 00:19:21.486 verify=crc32c-intel 00:19:21.486 [job0] 00:19:21.486 filename=/dev/nvme0n1 00:19:21.486 [job1] 00:19:21.486 filename=/dev/nvme0n2 00:19:21.486 [job2] 00:19:21.486 filename=/dev/nvme0n3 00:19:21.486 [job3] 00:19:21.486 filename=/dev/nvme0n4 00:19:21.486 Could not set queue depth (nvme0n1) 00:19:21.486 Could not set queue depth (nvme0n2) 00:19:21.486 Could not set queue depth (nvme0n3) 00:19:21.486 Could not set queue depth (nvme0n4) 00:19:21.744 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.744 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.744 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:19:21.744 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.744 fio-3.35 00:19:21.744 Starting 4 threads 00:19:23.115 00:19:23.115 job0: (groupid=0, jobs=1): err= 0: pid=3575438: Wed Jul 24 22:18:17 2024 00:19:23.115 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:23.115 slat (nsec): min=6694, max=38402, avg=9423.05, stdev=5294.73 00:19:23.115 clat (usec): min=315, max=43016, avg=638.70, stdev=1970.48 00:19:23.115 lat (usec): min=322, max=43036, avg=648.13, stdev=1971.07 00:19:23.115 clat percentiles (usec): 00:19:23.115 | 1.00th=[ 343], 5.00th=[ 367], 10.00th=[ 408], 20.00th=[ 465], 00:19:23.115 | 30.00th=[ 482], 40.00th=[ 498], 50.00th=[ 523], 60.00th=[ 553], 00:19:23.115 | 70.00th=[ 562], 80.00th=[ 603], 90.00th=[ 701], 95.00th=[ 758], 00:19:23.115 | 99.00th=[ 816], 99.50th=[ 832], 99.90th=[42730], 99.95th=[43254], 00:19:23.115 | 99.99th=[43254] 00:19:23.115 write: IOPS=1118, BW=4476KiB/s (4583kB/s)(4480KiB/1001msec); 0 zone resets 00:19:23.115 slat (usec): min=4, max=3351, avg=13.50, stdev=99.87 00:19:23.115 clat (usec): min=219, max=829, avg=281.44, stdev=101.46 00:19:23.115 lat (usec): min=229, max=3861, avg=294.94, stdev=147.59 00:19:23.115 clat percentiles (usec): 00:19:23.115 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 229], 00:19:23.115 | 30.00th=[ 231], 40.00th=[ 233], 50.00th=[ 235], 60.00th=[ 239], 00:19:23.115 | 70.00th=[ 269], 80.00th=[ 318], 90.00th=[ 424], 95.00th=[ 537], 00:19:23.115 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 824], 99.95th=[ 832], 00:19:23.115 | 99.99th=[ 832] 00:19:23.115 bw ( KiB/s): min= 4096, max= 4096, per=33.03%, avg=4096.00, stdev= 0.00, samples=1 00:19:23.115 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:23.115 lat (usec) : 250=34.89%, 500=33.68%, 750=28.40%, 1000=2.85% 00:19:23.116 lat (msec) : 2=0.05%, 20=0.05%, 50=0.09% 00:19:23.116 cpu : usr=1.50%, sys=1.90%, ctx=2147, majf=0, minf=1 00:19:23.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.116 issued rwts: total=1024,1120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.116 job1: (groupid=0, jobs=1): err= 0: pid=3575439: Wed Jul 24 22:18:17 2024 00:19:23.116 read: IOPS=18, BW=75.3KiB/s (77.1kB/s)(76.0KiB/1009msec) 00:19:23.116 slat (nsec): min=8711, max=31892, avg=21913.21, stdev=4198.90 00:19:23.116 clat (usec): min=41297, max=42916, avg=41990.69, stdev=280.35 00:19:23.116 lat (usec): min=41305, max=42939, avg=42012.60, stdev=282.35 00:19:23.116 clat percentiles (usec): 00:19:23.116 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:19:23.116 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:23.116 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:19:23.116 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:23.116 | 99.99th=[42730] 00:19:23.116 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:19:23.116 slat (nsec): min=9041, max=36323, avg=10632.77, stdev=1674.47 00:19:23.116 clat (usec): min=227, max=1307, avg=398.71, stdev=125.93 00:19:23.116 lat (usec): min=237, max=1316, avg=409.35, stdev=126.24 00:19:23.116 clat percentiles (usec): 00:19:23.116 | 1.00th=[ 241], 5.00th=[ 281], 
10.00th=[ 306], 20.00th=[ 334], 00:19:23.116 | 30.00th=[ 347], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 388], 00:19:23.116 | 70.00th=[ 404], 80.00th=[ 416], 90.00th=[ 510], 95.00th=[ 668], 00:19:23.116 | 99.00th=[ 1012], 99.50th=[ 1074], 99.90th=[ 1303], 99.95th=[ 1303], 00:19:23.116 | 99.99th=[ 1303] 00:19:23.116 bw ( KiB/s): min= 4096, max= 4096, per=33.03%, avg=4096.00, stdev= 0.00, samples=1 00:19:23.116 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:23.116 lat (usec) : 250=2.26%, 500=84.18%, 750=7.34%, 1000=1.51% 00:19:23.116 lat (msec) : 2=1.13%, 50=3.58% 00:19:23.116 cpu : usr=0.00%, sys=0.79%, ctx=532, majf=0, minf=2 00:19:23.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.116 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.116 job2: (groupid=0, jobs=1): err= 0: pid=3575440: Wed Jul 24 22:18:17 2024 00:19:23.116 read: IOPS=616, BW=2465KiB/s (2524kB/s)(2480KiB/1006msec) 00:19:23.116 slat (nsec): min=5571, max=35140, avg=7609.89, stdev=2686.02 00:19:23.116 clat (usec): min=318, max=43013, avg=1143.28, stdev=5245.68 00:19:23.116 lat (usec): min=324, max=43035, avg=1150.89, stdev=5247.32 00:19:23.116 clat percentiles (usec): 00:19:23.116 | 1.00th=[ 338], 5.00th=[ 355], 10.00th=[ 379], 20.00th=[ 424], 00:19:23.116 | 30.00th=[ 437], 40.00th=[ 445], 50.00th=[ 453], 60.00th=[ 469], 00:19:23.116 | 70.00th=[ 502], 80.00th=[ 537], 90.00th=[ 570], 95.00th=[ 652], 00:19:23.116 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:19:23.116 | 99.99th=[43254] 00:19:23.116 write: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec); 0 zone resets 00:19:23.116 slat (nsec): min=6409, max=38022, avg=9022.70, stdev=2284.65 00:19:23.116 clat (usec): min=214, max=880, avg=272.60, stdev=90.34 00:19:23.116 lat (usec): min=222, max=890, avg=281.62, stdev=91.37 00:19:23.116 clat percentiles (usec): 00:19:23.116 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 229], 00:19:23.116 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 249], 00:19:23.116 | 70.00th=[ 262], 80.00th=[ 281], 90.00th=[ 347], 95.00th=[ 494], 00:19:23.116 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 799], 99.95th=[ 881], 00:19:23.116 | 99.99th=[ 881] 00:19:23.116 bw ( KiB/s): min= 4096, max= 4096, per=33.03%, avg=4096.00, stdev= 0.00, samples=2 00:19:23.116 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:19:23.116 lat (usec) : 250=38.08%, 500=47.32%, 750=13.14%, 1000=0.73% 00:19:23.116 lat (msec) : 2=0.12%, 50=0.61% 00:19:23.116 cpu : usr=1.09%, sys=0.90%, ctx=1646, majf=0, minf=1 00:19:23.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.116 issued rwts: total=620,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.116 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.116 job3: (groupid=0, jobs=1): err= 0: pid=3575441: Wed Jul 24 22:18:17 2024 00:19:23.116 read: IOPS=18, BW=74.4KiB/s (76.1kB/s)(76.0KiB/1022msec) 00:19:23.116 slat (nsec): min=10831, max=22932, avg=20953.79, stdev=2859.15 00:19:23.116 clat (usec): min=41802, max=42993, 
avg=42078.08, stdev=321.73 00:19:23.116 lat (usec): min=41823, max=43015, avg=42099.03, stdev=321.86 00:19:23.116 clat percentiles (usec): 00:19:23.116 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:19:23.116 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:23.116 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[43254], 00:19:23.116 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:23.116 | 99.99th=[43254] 00:19:23.116 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:19:23.116 slat (nsec): min=4643, max=36239, avg=9178.20, stdev=2496.72 00:19:23.116 clat (usec): min=228, max=1336, avg=422.53, stdev=128.50 00:19:23.116 lat (usec): min=238, max=1346, avg=431.70, stdev=128.94 00:19:23.116 clat percentiles (usec): 00:19:23.116 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 265], 20.00th=[ 347], 00:19:23.116 | 30.00th=[ 363], 40.00th=[ 392], 50.00th=[ 404], 60.00th=[ 420], 00:19:23.116 | 70.00th=[ 449], 80.00th=[ 486], 90.00th=[ 545], 95.00th=[ 660], 00:19:23.116 | 99.00th=[ 807], 99.50th=[ 1074], 99.90th=[ 1336], 99.95th=[ 1336], 00:19:23.116 | 99.99th=[ 1336] 00:19:23.116 bw ( KiB/s): min= 4096, max= 4096, per=33.03%, avg=4096.00, stdev= 0.00, samples=1 00:19:23.116 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:23.116 lat (usec) : 250=9.42%, 500=72.13%, 750=11.49%, 1000=2.64% 00:19:23.116 lat (msec) : 2=0.75%, 50=3.58% 00:19:23.116 cpu : usr=0.39%, sys=0.29%, ctx=531, majf=0, minf=1 00:19:23.116 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.117 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.117 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.117 00:19:23.117 Run status group 0 (all jobs): 00:19:23.117 READ: bw=6583KiB/s (6741kB/s), 74.4KiB/s-4092KiB/s (76.1kB/s-4190kB/s), io=6728KiB (6889kB), run=1001-1022msec 00:19:23.117 WRITE: bw=12.1MiB/s (12.7MB/s), 2004KiB/s-4476KiB/s (2052kB/s-4583kB/s), io=12.4MiB (13.0MB), run=1001-1022msec 00:19:23.117 00:19:23.117 Disk stats (read/write): 00:19:23.117 nvme0n1: ios=837/1024, merge=0/0, ticks=776/278, in_queue=1054, util=97.69% 00:19:23.117 nvme0n2: ios=37/512, merge=0/0, ticks=1555/202, in_queue=1757, util=98.04% 00:19:23.117 nvme0n3: ios=528/1024, merge=0/0, ticks=500/275, in_queue=775, util=87.60% 00:19:23.117 nvme0n4: ios=13/512, merge=0/0, ticks=548/214, in_queue=762, util=89.15% 00:19:23.117 22:18:17 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:23.117 [global] 00:19:23.117 thread=1 00:19:23.117 invalidate=1 00:19:23.117 rw=randwrite 00:19:23.117 time_based=1 00:19:23.117 runtime=1 00:19:23.117 ioengine=libaio 00:19:23.117 direct=1 00:19:23.117 bs=4096 00:19:23.117 iodepth=1 00:19:23.117 norandommap=0 00:19:23.117 numjobs=1 00:19:23.117 00:19:23.117 verify_dump=1 00:19:23.117 verify_backlog=512 00:19:23.117 verify_state_save=0 00:19:23.117 do_verify=1 00:19:23.117 verify=crc32c-intel 00:19:23.117 [job0] 00:19:23.117 filename=/dev/nvme0n1 00:19:23.117 [job1] 00:19:23.117 filename=/dev/nvme0n2 00:19:23.117 [job2] 00:19:23.117 filename=/dev/nvme0n3 00:19:23.117 [job3] 00:19:23.117 filename=/dev/nvme0n4 00:19:23.117 Could not set queue depth (nvme0n1) 00:19:23.117 Could 
not set queue depth (nvme0n2) 00:19:23.117 Could not set queue depth (nvme0n3) 00:19:23.117 Could not set queue depth (nvme0n4) 00:19:23.375 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.375 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.375 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.375 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:23.375 fio-3.35 00:19:23.375 Starting 4 threads 00:19:24.747 00:19:24.747 job0: (groupid=0, jobs=1): err= 0: pid=3575817: Wed Jul 24 22:18:19 2024 00:19:24.747 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:24.747 slat (nsec): min=6336, max=29936, avg=7124.30, stdev=1067.08 00:19:24.747 clat (usec): min=331, max=831, avg=520.43, stdev=57.58 00:19:24.747 lat (usec): min=337, max=837, avg=527.55, stdev=57.59 00:19:24.747 clat percentiles (usec): 00:19:24.747 | 1.00th=[ 359], 5.00th=[ 392], 10.00th=[ 437], 20.00th=[ 498], 00:19:24.747 | 30.00th=[ 510], 40.00th=[ 519], 50.00th=[ 529], 60.00th=[ 537], 00:19:24.747 | 70.00th=[ 553], 80.00th=[ 570], 90.00th=[ 578], 95.00th=[ 578], 00:19:24.747 | 99.00th=[ 644], 99.50th=[ 701], 99.90th=[ 783], 99.95th=[ 832], 00:19:24.747 | 99.99th=[ 832] 00:19:24.747 write: IOPS=1520, BW=6082KiB/s (6228kB/s)(6088KiB/1001msec); 0 zone resets 00:19:24.747 slat (usec): min=3, max=290, avg=10.56, stdev= 9.66 00:19:24.747 clat (usec): min=219, max=2972, avg=288.12, stdev=136.30 00:19:24.747 lat (usec): min=229, max=2976, avg=298.68, stdev=137.71 00:19:24.747 clat percentiles (usec): 00:19:24.747 | 1.00th=[ 223], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 231], 00:19:24.747 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:19:24.747 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 396], 95.00th=[ 594], 00:19:24.747 | 99.00th=[ 799], 99.50th=[ 848], 99.90th=[ 1074], 99.95th=[ 2966], 00:19:24.747 | 99.99th=[ 2966] 00:19:24.747 bw ( KiB/s): min= 5488, max= 5488, per=32.18%, avg=5488.00, stdev= 0.00, samples=1 00:19:24.747 iops : min= 1372, max= 1372, avg=1372.00, stdev= 0.00, samples=1 00:19:24.747 lat (usec) : 250=37.71%, 500=26.32%, 750=34.88%, 1000=1.02% 00:19:24.747 lat (msec) : 2=0.04%, 4=0.04% 00:19:24.747 cpu : usr=1.10%, sys=2.50%, ctx=2548, majf=0, minf=2 00:19:24.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.747 issued rwts: total=1024,1522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.747 job1: (groupid=0, jobs=1): err= 0: pid=3575818: Wed Jul 24 22:18:19 2024 00:19:24.747 read: IOPS=552, BW=2211KiB/s (2264kB/s)(2224KiB/1006msec) 00:19:24.747 slat (nsec): min=6432, max=35775, avg=7470.14, stdev=1838.59 00:19:24.747 clat (usec): min=414, max=42430, avg=1247.68, stdev=4942.48 00:19:24.747 lat (usec): min=421, max=42437, avg=1255.16, stdev=4942.66 00:19:24.747 clat percentiles (usec): 00:19:24.747 | 1.00th=[ 437], 5.00th=[ 465], 10.00th=[ 490], 20.00th=[ 529], 00:19:24.747 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 619], 00:19:24.747 | 70.00th=[ 701], 80.00th=[ 816], 90.00th=[ 955], 95.00th=[ 1057], 00:19:24.747 | 99.00th=[41681], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:24.747 | 99.99th=[42206] 00:19:24.747 write: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec); 0 zone resets 00:19:24.747 slat (nsec): min=6051, max=30319, avg=10460.18, stdev=1735.96 00:19:24.747 clat (usec): min=213, max=789, avg=286.66, stdev=82.57 00:19:24.747 lat (usec): min=226, max=815, avg=297.12, stdev=82.60 00:19:24.747 clat percentiles (usec): 00:19:24.747 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 227], 20.00th=[ 231], 00:19:24.747 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 260], 60.00th=[ 269], 00:19:24.747 | 70.00th=[ 285], 80.00th=[ 318], 90.00th=[ 396], 95.00th=[ 461], 00:19:24.747 | 99.00th=[ 603], 99.50th=[ 619], 99.90th=[ 668], 99.95th=[ 791], 00:19:24.747 | 99.99th=[ 791] 00:19:24.747 bw ( KiB/s): min= 4096, max= 4096, per=24.02%, avg=4096.00, stdev= 0.00, samples=2 00:19:24.747 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:19:24.747 lat (usec) : 250=27.41%, 500=39.05%, 750=24.68%, 1000=5.70% 00:19:24.747 lat (msec) : 2=2.66%, 50=0.51% 00:19:24.747 cpu : usr=0.70%, sys=1.59%, ctx=1582, majf=0, minf=1 00:19:24.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.747 issued rwts: total=556,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.747 job2: (groupid=0, jobs=1): err= 0: pid=3575820: Wed Jul 24 22:18:19 2024 00:19:24.747 read: IOPS=21, BW=84.5KiB/s (86.5kB/s)(88.0KiB/1042msec) 00:19:24.747 slat (nsec): min=9708, max=24374, avg=15851.36, stdev=5668.52 00:19:24.747 clat (usec): min=577, max=42927, avg=40047.41, stdev=8826.31 00:19:24.747 lat (usec): min=601, max=42950, avg=40063.26, stdev=8824.49 00:19:24.747 clat percentiles (usec): 00:19:24.747 | 1.00th=[ 578], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:19:24.747 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:19:24.747 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:19:24.747 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:19:24.747 | 99.99th=[42730] 00:19:24.747 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:19:24.747 slat (nsec): min=9507, max=36123, avg=11062.12, stdev=1772.36 00:19:24.747 clat (usec): min=226, max=831, avg=299.38, stdev=91.04 00:19:24.747 lat (usec): min=241, max=868, avg=310.45, stdev=91.13 00:19:24.747 clat percentiles (usec): 00:19:24.747 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 245], 00:19:24.747 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:19:24.747 | 70.00th=[ 289], 80.00th=[ 330], 90.00th=[ 420], 95.00th=[ 545], 00:19:24.747 | 99.00th=[ 603], 99.50th=[ 627], 99.90th=[ 832], 99.95th=[ 832], 00:19:24.747 | 99.99th=[ 832] 00:19:24.747 bw ( KiB/s): min= 4096, max= 4096, per=24.02%, avg=4096.00, stdev= 0.00, samples=1 00:19:24.747 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:24.747 lat (usec) : 250=27.90%, 500=61.42%, 750=6.55%, 1000=0.19% 00:19:24.747 lat (msec) : 50=3.93% 00:19:24.747 cpu : usr=0.29%, sys=0.58%, ctx=535, majf=0, minf=1 00:19:24.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.747 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.747 job3: (groupid=0, jobs=1): err= 0: pid=3575821: Wed Jul 24 22:18:19 2024 00:19:24.747 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:24.747 slat (nsec): min=7489, max=41969, avg=8707.22, stdev=1713.07 00:19:24.747 clat (usec): min=345, max=1102, avg=547.39, stdev=71.86 00:19:24.747 lat (usec): min=353, max=1112, avg=556.09, stdev=72.07 00:19:24.747 clat percentiles (usec): 00:19:24.747 | 1.00th=[ 359], 5.00th=[ 412], 10.00th=[ 490], 20.00th=[ 515], 00:19:24.747 | 30.00th=[ 529], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 553], 00:19:24.747 | 70.00th=[ 562], 80.00th=[ 586], 90.00th=[ 619], 95.00th=[ 652], 00:19:24.747 | 99.00th=[ 783], 99.50th=[ 857], 99.90th=[ 938], 99.95th=[ 1106], 00:19:24.747 | 99.99th=[ 1106] 00:19:24.747 write: IOPS=1383, BW=5534KiB/s (5667kB/s)(5540KiB/1001msec); 0 zone resets 00:19:24.747 slat (usec): min=8, max=309, avg=12.85, stdev= 8.32 00:19:24.747 clat (usec): min=220, max=979, avg=292.90, stdev=98.41 00:19:24.747 lat (usec): min=232, max=1098, avg=305.75, stdev=100.69 00:19:24.747 clat percentiles (usec): 00:19:24.747 | 1.00th=[ 229], 5.00th=[ 233], 10.00th=[ 235], 20.00th=[ 239], 00:19:24.747 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:19:24.747 | 70.00th=[ 281], 80.00th=[ 310], 90.00th=[ 404], 95.00th=[ 506], 00:19:24.747 | 99.00th=[ 717], 99.50th=[ 807], 99.90th=[ 840], 99.95th=[ 979], 00:19:24.747 | 99.99th=[ 979] 00:19:24.747 bw ( KiB/s): min= 5088, max= 5088, per=29.83%, avg=5088.00, stdev= 0.00, samples=1 00:19:24.747 iops : min= 1272, max= 1272, avg=1272.00, stdev= 0.00, samples=1 00:19:24.747 lat (usec) : 250=25.57%, 500=34.12%, 750=39.06%, 1000=1.20% 00:19:24.747 lat (msec) : 2=0.04% 00:19:24.747 cpu : usr=2.60%, sys=3.60%, ctx=2411, majf=0, minf=1 00:19:24.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.747 issued rwts: total=1024,1385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.747 00:19:24.747 Run status group 0 (all jobs): 00:19:24.747 READ: bw=9.84MiB/s (10.3MB/s), 84.5KiB/s-4092KiB/s (86.5kB/s-4190kB/s), io=10.3MiB (10.8MB), run=1001-1042msec 00:19:24.747 WRITE: bw=16.7MiB/s (17.5MB/s), 1965KiB/s-6082KiB/s (2013kB/s-6228kB/s), io=17.4MiB (18.2MB), run=1001-1042msec 00:19:24.747 00:19:24.747 Disk stats (read/write): 00:19:24.747 nvme0n1: ios=1043/1024, merge=0/0, ticks=780/295, in_queue=1075, util=95.89% 00:19:24.747 nvme0n2: ios=535/1010, merge=0/0, ticks=1508/276, in_queue=1784, util=99.29% 00:19:24.747 nvme0n3: ios=40/512, merge=0/0, ticks=1637/152, in_queue=1789, util=99.38% 00:19:24.747 nvme0n4: ios=1009/1024, merge=0/0, ticks=710/283, in_queue=993, util=98.74% 00:19:24.747 22:18:19 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:24.747 [global] 00:19:24.747 thread=1 00:19:24.747 invalidate=1 00:19:24.747 rw=write 00:19:24.747 time_based=1 00:19:24.747 runtime=1 00:19:24.747 ioengine=libaio 00:19:24.747 direct=1 00:19:24.747 bs=4096 00:19:24.747 iodepth=128 00:19:24.747 norandommap=0 00:19:24.747 numjobs=1 00:19:24.747 00:19:24.747 
verify_dump=1 00:19:24.747 verify_backlog=512 00:19:24.747 verify_state_save=0 00:19:24.747 do_verify=1 00:19:24.747 verify=crc32c-intel 00:19:24.747 [job0] 00:19:24.747 filename=/dev/nvme0n1 00:19:24.747 [job1] 00:19:24.747 filename=/dev/nvme0n2 00:19:24.747 [job2] 00:19:24.747 filename=/dev/nvme0n3 00:19:24.747 [job3] 00:19:24.747 filename=/dev/nvme0n4 00:19:24.747 Could not set queue depth (nvme0n1) 00:19:24.747 Could not set queue depth (nvme0n2) 00:19:24.747 Could not set queue depth (nvme0n3) 00:19:24.747 Could not set queue depth (nvme0n4) 00:19:24.747 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.747 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.747 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.747 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.747 fio-3.35 00:19:24.747 Starting 4 threads 00:19:26.119 00:19:26.119 job0: (groupid=0, jobs=1): err= 0: pid=3576191: Wed Jul 24 22:18:21 2024 00:19:26.119 read: IOPS=5570, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1011msec) 00:19:26.119 slat (nsec): min=1481, max=14817k, avg=78859.78, stdev=493067.48 00:19:26.119 clat (usec): min=4186, max=25331, avg=10653.24, stdev=3390.65 00:19:26.119 lat (usec): min=4190, max=25343, avg=10732.10, stdev=3406.54 00:19:26.119 clat percentiles (usec): 00:19:26.119 | 1.00th=[ 4621], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7767], 00:19:26.119 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[11207], 00:19:26.119 | 70.00th=[12125], 80.00th=[13435], 90.00th=[14877], 95.00th=[16057], 00:19:26.120 | 99.00th=[22152], 99.50th=[22938], 99.90th=[25035], 99.95th=[25035], 00:19:26.120 | 99.99th=[25297] 00:19:26.120 write: IOPS=5866, BW=22.9MiB/s (24.0MB/s)(23.2MiB/1011msec); 0 zone resets 00:19:26.120 slat (usec): min=2, max=9770, avg=89.97, stdev=436.37 00:19:26.120 clat (usec): min=1637, max=48056, avg=11502.65, stdev=4859.37 00:19:26.120 lat (usec): min=1651, max=48064, avg=11592.62, stdev=4880.47 00:19:26.120 clat percentiles (usec): 00:19:26.120 | 1.00th=[ 5080], 5.00th=[ 6652], 10.00th=[ 7177], 20.00th=[ 9110], 00:19:26.120 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11469], 00:19:26.120 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13566], 95.00th=[15926], 00:19:26.120 | 99.00th=[38536], 99.50th=[40633], 99.90th=[44827], 99.95th=[47973], 00:19:26.120 | 99.99th=[47973] 00:19:26.120 bw ( KiB/s): min=22776, max=23648, per=34.94%, avg=23212.00, stdev=616.60, samples=2 00:19:26.120 iops : min= 5694, max= 5912, avg=5803.00, stdev=154.15, samples=2 00:19:26.120 lat (msec) : 2=0.02%, 4=0.02%, 10=37.70%, 20=59.60%, 50=2.67% 00:19:26.120 cpu : usr=3.47%, sys=3.07%, ctx=833, majf=0, minf=1 00:19:26.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:26.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:26.120 issued rwts: total=5632,5931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:26.120 job1: (groupid=0, jobs=1): err= 0: pid=3576192: Wed Jul 24 22:18:21 2024 00:19:26.120 read: IOPS=3893, BW=15.2MiB/s (15.9MB/s)(15.5MiB/1016msec) 00:19:26.120 slat (nsec): min=1032, max=43179k, avg=114759.93, stdev=1054983.12 
00:19:26.120 clat (usec): min=1755, max=68966, avg=17419.65, stdev=11819.98 00:19:26.120 lat (usec): min=1763, max=75029, avg=17534.41, stdev=11898.97 00:19:26.120 clat percentiles (usec): 00:19:26.120 | 1.00th=[ 2507], 5.00th=[ 5735], 10.00th=[ 7504], 20.00th=[ 8848], 00:19:26.120 | 30.00th=[10421], 40.00th=[11600], 50.00th=[13829], 60.00th=[15401], 00:19:26.120 | 70.00th=[18220], 80.00th=[23462], 90.00th=[34341], 95.00th=[47449], 00:19:26.120 | 99.00th=[54789], 99.50th=[62129], 99.90th=[68682], 99.95th=[68682], 00:19:26.120 | 99.99th=[68682] 00:19:26.120 write: IOPS=4031, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1016msec); 0 zone resets 00:19:26.120 slat (nsec): min=1813, max=16046k, avg=107446.66, stdev=751504.03 00:19:26.120 clat (usec): min=1584, max=76966, avg=14660.04, stdev=10212.32 00:19:26.120 lat (usec): min=1600, max=76975, avg=14767.49, stdev=10253.76 00:19:26.120 clat percentiles (usec): 00:19:26.120 | 1.00th=[ 3294], 5.00th=[ 5145], 10.00th=[ 6915], 20.00th=[ 8291], 00:19:26.120 | 30.00th=[ 9896], 40.00th=[11600], 50.00th=[12780], 60.00th=[14353], 00:19:26.120 | 70.00th=[15795], 80.00th=[17957], 90.00th=[22152], 95.00th=[25822], 00:19:26.120 | 99.00th=[66323], 99.50th=[71828], 99.90th=[72877], 99.95th=[77071], 00:19:26.120 | 99.99th=[77071] 00:19:26.120 bw ( KiB/s): min=13736, max=19032, per=24.66%, avg=16384.00, stdev=3744.84, samples=2 00:19:26.120 iops : min= 3434, max= 4758, avg=4096.00, stdev=936.21, samples=2 00:19:26.120 lat (msec) : 2=0.06%, 4=2.55%, 10=25.88%, 20=51.29%, 50=17.52% 00:19:26.120 lat (msec) : 100=2.69% 00:19:26.120 cpu : usr=1.97%, sys=3.65%, ctx=502, majf=0, minf=1 00:19:26.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:26.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:26.120 issued rwts: total=3956,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:26.120 job2: (groupid=0, jobs=1): err= 0: pid=3576193: Wed Jul 24 22:18:21 2024 00:19:26.120 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:19:26.120 slat (nsec): min=1080, max=21712k, avg=115329.07, stdev=818410.86 00:19:26.120 clat (usec): min=2378, max=34360, avg=16065.41, stdev=6665.57 00:19:26.120 lat (usec): min=2381, max=38992, avg=16180.74, stdev=6711.12 00:19:26.120 clat percentiles (usec): 00:19:26.120 | 1.00th=[ 5276], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[10683], 00:19:26.120 | 30.00th=[11469], 40.00th=[12518], 50.00th=[14091], 60.00th=[16188], 00:19:26.120 | 70.00th=[19006], 80.00th=[21365], 90.00th=[26608], 95.00th=[29492], 00:19:26.120 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:19:26.120 | 99.99th=[34341] 00:19:26.120 write: IOPS=3825, BW=14.9MiB/s (15.7MB/s)(15.0MiB/1004msec); 0 zone resets 00:19:26.120 slat (nsec): min=1901, max=25615k, avg=141475.90, stdev=906790.76 00:19:26.120 clat (usec): min=1606, max=51890, avg=18163.24, stdev=9576.59 00:19:26.120 lat (usec): min=1617, max=51899, avg=18304.72, stdev=9629.71 00:19:26.120 clat percentiles (usec): 00:19:26.120 | 1.00th=[ 3589], 5.00th=[ 6587], 10.00th=[ 9110], 20.00th=[11076], 00:19:26.120 | 30.00th=[12125], 40.00th=[12911], 50.00th=[15270], 60.00th=[17957], 00:19:26.120 | 70.00th=[21365], 80.00th=[24773], 90.00th=[32375], 95.00th=[38536], 00:19:26.120 | 99.00th=[46924], 99.50th=[47449], 99.90th=[50594], 99.95th=[51643], 00:19:26.120 | 99.99th=[51643] 00:19:26.120 bw ( KiB/s): 
min=12704, max=17008, per=22.36%, avg=14856.00, stdev=3043.39, samples=2 00:19:26.120 iops : min= 3176, max= 4252, avg=3714.00, stdev=760.85, samples=2 00:19:26.120 lat (msec) : 2=0.15%, 4=0.66%, 10=12.40%, 20=56.92%, 50=29.82% 00:19:26.120 lat (msec) : 100=0.05% 00:19:26.120 cpu : usr=2.59%, sys=2.09%, ctx=506, majf=0, minf=1 00:19:26.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:26.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:26.120 issued rwts: total=3584,3841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:26.120 job3: (groupid=0, jobs=1): err= 0: pid=3576194: Wed Jul 24 22:18:21 2024 00:19:26.120 read: IOPS=2562, BW=10.0MiB/s (10.5MB/s)(10.2MiB/1020msec) 00:19:26.120 slat (nsec): min=1585, max=24634k, avg=179722.70, stdev=1265121.66 00:19:26.120 clat (usec): min=7391, max=51078, avg=22371.49, stdev=7278.13 00:19:26.120 lat (usec): min=7395, max=51089, avg=22551.21, stdev=7352.58 00:19:26.120 clat percentiles (usec): 00:19:26.120 | 1.00th=[ 9634], 5.00th=[12911], 10.00th=[13960], 20.00th=[16581], 00:19:26.120 | 30.00th=[19006], 40.00th=[19792], 50.00th=[21627], 60.00th=[22676], 00:19:26.120 | 70.00th=[24249], 80.00th=[28443], 90.00th=[33162], 95.00th=[36439], 00:19:26.120 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[44303], 00:19:26.120 | 99.99th=[51119] 00:19:26.120 write: IOPS=3011, BW=11.8MiB/s (12.3MB/s)(12.0MiB/1020msec); 0 zone resets 00:19:26.120 slat (usec): min=2, max=20368, avg=168.23, stdev=1003.58 00:19:26.120 clat (usec): min=3691, max=60874, avg=23089.55, stdev=11887.27 00:19:26.120 lat (usec): min=3705, max=60884, avg=23257.78, stdev=11934.44 00:19:26.120 clat percentiles (usec): 00:19:26.120 | 1.00th=[ 6456], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[12518], 00:19:26.120 | 30.00th=[15401], 40.00th=[17695], 50.00th=[21103], 60.00th=[22938], 00:19:26.120 | 70.00th=[27395], 80.00th=[31589], 90.00th=[38011], 95.00th=[47449], 00:19:26.120 | 99.00th=[60031], 99.50th=[60556], 99.90th=[61080], 99.95th=[61080], 00:19:26.120 | 99.99th=[61080] 00:19:26.120 bw ( KiB/s): min=11224, max=12760, per=18.05%, avg=11992.00, stdev=1086.12, samples=2 00:19:26.120 iops : min= 2806, max= 3190, avg=2998.00, stdev=271.53, samples=2 00:19:26.120 lat (msec) : 4=0.12%, 10=5.56%, 20=39.15%, 50=52.67%, 100=2.50% 00:19:26.120 cpu : usr=2.94%, sys=2.55%, ctx=366, majf=0, minf=1 00:19:26.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:26.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:26.120 issued rwts: total=2614,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:26.120 00:19:26.120 Run status group 0 (all jobs): 00:19:26.120 READ: bw=60.5MiB/s (63.4MB/s), 10.0MiB/s-21.8MiB/s (10.5MB/s-22.8MB/s), io=61.7MiB (64.7MB), run=1004-1020msec 00:19:26.120 WRITE: bw=64.9MiB/s (68.0MB/s), 11.8MiB/s-22.9MiB/s (12.3MB/s-24.0MB/s), io=66.2MiB (69.4MB), run=1004-1020msec 00:19:26.120 00:19:26.120 Disk stats (read/write): 00:19:26.120 nvme0n1: ios=5005/5120, merge=0/0, ticks=52085/53510, in_queue=105595, util=92.89% 00:19:26.120 nvme0n2: ios=3110/3584, merge=0/0, ticks=49863/51804, in_queue=101667, util=89.23% 00:19:26.120 nvme0n3: ios=2741/3072, merge=0/0, 
ticks=37013/41799, in_queue=78812, util=88.95% 00:19:26.120 nvme0n4: ios=2422/2560, merge=0/0, ticks=53287/50967, in_queue=104254, util=89.70% 00:19:26.120 22:18:21 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:26.120 [global] 00:19:26.120 thread=1 00:19:26.120 invalidate=1 00:19:26.120 rw=randwrite 00:19:26.120 time_based=1 00:19:26.120 runtime=1 00:19:26.120 ioengine=libaio 00:19:26.120 direct=1 00:19:26.120 bs=4096 00:19:26.120 iodepth=128 00:19:26.120 norandommap=0 00:19:26.120 numjobs=1 00:19:26.120 00:19:26.120 verify_dump=1 00:19:26.120 verify_backlog=512 00:19:26.120 verify_state_save=0 00:19:26.120 do_verify=1 00:19:26.120 verify=crc32c-intel 00:19:26.120 [job0] 00:19:26.120 filename=/dev/nvme0n1 00:19:26.120 [job1] 00:19:26.120 filename=/dev/nvme0n2 00:19:26.120 [job2] 00:19:26.120 filename=/dev/nvme0n3 00:19:26.120 [job3] 00:19:26.120 filename=/dev/nvme0n4 00:19:26.120 Could not set queue depth (nvme0n1) 00:19:26.120 Could not set queue depth (nvme0n2) 00:19:26.120 Could not set queue depth (nvme0n3) 00:19:26.120 Could not set queue depth (nvme0n4) 00:19:26.377 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.377 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.377 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.377 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.377 fio-3.35 00:19:26.377 Starting 4 threads 00:19:27.751 00:19:27.751 job0: (groupid=0, jobs=1): err= 0: pid=3576572: Wed Jul 24 22:18:22 2024 00:19:27.751 read: IOPS=4015, BW=15.7MiB/s (16.4MB/s)(16.0MiB/1020msec) 00:19:27.751 slat (nsec): min=1164, max=8766.4k, avg=101356.26, stdev=589078.47 00:19:27.751 clat (usec): min=5251, max=32944, avg=12438.72, stdev=5700.21 00:19:27.751 lat (usec): min=5253, max=32946, avg=12540.08, stdev=5732.72 00:19:27.751 clat percentiles (usec): 00:19:27.751 | 1.00th=[ 5866], 5.00th=[ 6980], 10.00th=[ 7308], 20.00th=[ 8094], 00:19:27.751 | 30.00th=[ 8455], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[12125], 00:19:27.751 | 70.00th=[13435], 80.00th=[16581], 90.00th=[21627], 95.00th=[25297], 00:19:27.751 | 99.00th=[29754], 99.50th=[31065], 99.90th=[32900], 99.95th=[32900], 00:19:27.751 | 99.99th=[32900] 00:19:27.751 write: IOPS=4440, BW=17.3MiB/s (18.2MB/s)(17.7MiB/1020msec); 0 zone resets 00:19:27.751 slat (usec): min=2, max=15705, avg=124.99, stdev=552.69 00:19:27.751 clat (usec): min=1990, max=38891, avg=17196.56, stdev=6284.06 00:19:27.751 lat (usec): min=1998, max=38946, avg=17321.55, stdev=6307.39 00:19:27.751 clat percentiles (usec): 00:19:27.751 | 1.00th=[ 4293], 5.00th=[ 6849], 10.00th=[ 8586], 20.00th=[11076], 00:19:27.751 | 30.00th=[14353], 40.00th=[16712], 50.00th=[18220], 60.00th=[19530], 00:19:27.751 | 70.00th=[20055], 80.00th=[20841], 90.00th=[22414], 95.00th=[28181], 00:19:27.751 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[38536], 00:19:27.751 | 99.99th=[39060] 00:19:27.751 bw ( KiB/s): min=16384, max=18832, per=27.13%, avg=17608.00, stdev=1731.00, samples=2 00:19:27.751 iops : min= 4096, max= 4708, avg=4402.00, stdev=432.75, samples=2 00:19:27.751 lat (msec) : 2=0.02%, 4=0.22%, 10=30.96%, 20=46.78%, 50=22.02% 00:19:27.751 cpu : usr=1.57%, sys=4.02%, ctx=801, majf=0, minf=1 
00:19:27.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:27.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:27.751 issued rwts: total=4096,4529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:27.751 job1: (groupid=0, jobs=1): err= 0: pid=3576573: Wed Jul 24 22:18:22 2024 00:19:27.751 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:19:27.751 slat (nsec): min=1076, max=10774k, avg=93320.39, stdev=506206.73 00:19:27.751 clat (usec): min=4909, max=62386, avg=12459.06, stdev=5791.49 00:19:27.751 lat (usec): min=4914, max=65451, avg=12552.38, stdev=5832.01 00:19:27.751 clat percentiles (usec): 00:19:27.751 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9372], 00:19:27.752 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10552], 60.00th=[11076], 00:19:27.752 | 70.00th=[12125], 80.00th=[14091], 90.00th=[17695], 95.00th=[23725], 00:19:27.752 | 99.00th=[38011], 99.50th=[44827], 99.90th=[62129], 99.95th=[62129], 00:19:27.752 | 99.99th=[62129] 00:19:27.752 write: IOPS=4116, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1002msec); 0 zone resets 00:19:27.752 slat (nsec): min=1949, max=33323k, avg=146476.24, stdev=853426.06 00:19:27.752 clat (usec): min=903, max=63645, avg=18069.81, stdev=10842.35 00:19:27.752 lat (usec): min=4579, max=64555, avg=18216.28, stdev=10907.56 00:19:27.752 clat percentiles (usec): 00:19:27.752 | 1.00th=[ 7373], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11338], 00:19:27.752 | 30.00th=[12387], 40.00th=[13304], 50.00th=[14222], 60.00th=[15270], 00:19:27.752 | 70.00th=[16909], 80.00th=[22676], 90.00th=[30802], 95.00th=[46400], 00:19:27.752 | 99.00th=[58459], 99.50th=[61080], 99.90th=[63701], 99.95th=[63701], 00:19:27.752 | 99.99th=[63701] 00:19:27.752 bw ( KiB/s): min=12432, max=12432, per=19.16%, avg=12432.00, stdev= 0.00, samples=1 00:19:27.752 iops : min= 3108, max= 3108, avg=3108.00, stdev= 0.00, samples=1 00:19:27.752 lat (usec) : 1000=0.01% 00:19:27.752 lat (msec) : 10=23.66%, 20=60.66%, 50=13.67%, 100=1.99% 00:19:27.752 cpu : usr=1.60%, sys=2.60%, ctx=758, majf=0, minf=1 00:19:27.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:27.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:27.752 issued rwts: total=4096,4125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.752 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:27.752 job2: (groupid=0, jobs=1): err= 0: pid=3576574: Wed Jul 24 22:18:22 2024 00:19:27.752 read: IOPS=4509, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1002msec) 00:19:27.752 slat (nsec): min=1057, max=16147k, avg=102226.27, stdev=755358.50 00:19:27.752 clat (usec): min=1602, max=40883, avg=15021.77, stdev=4961.32 00:19:27.752 lat (usec): min=1613, max=40886, avg=15123.99, stdev=4993.01 00:19:27.752 clat percentiles (usec): 00:19:27.752 | 1.00th=[ 3425], 5.00th=[ 8356], 10.00th=[ 9634], 20.00th=[11207], 00:19:27.752 | 30.00th=[12518], 40.00th=[13435], 50.00th=[14615], 60.00th=[15926], 00:19:27.752 | 70.00th=[16909], 80.00th=[18482], 90.00th=[20841], 95.00th=[23200], 00:19:27.752 | 99.00th=[31589], 99.50th=[36963], 99.90th=[39584], 99.95th=[40633], 00:19:27.752 | 99.99th=[40633] 00:19:27.752 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:19:27.752 slat 
(nsec): min=1902, max=10279k, avg=91887.13, stdev=619451.20 00:19:27.752 clat (usec): min=522, max=40886, avg=12825.31, stdev=5271.98 00:19:27.752 lat (usec): min=529, max=40890, avg=12917.20, stdev=5288.87 00:19:27.752 clat percentiles (usec): 00:19:27.752 | 1.00th=[ 1303], 5.00th=[ 4948], 10.00th=[ 6652], 20.00th=[ 8848], 00:19:27.752 | 30.00th=[10290], 40.00th=[11338], 50.00th=[12387], 60.00th=[13566], 00:19:27.752 | 70.00th=[14615], 80.00th=[16909], 90.00th=[19268], 95.00th=[21890], 00:19:27.752 | 99.00th=[28705], 99.50th=[29754], 99.90th=[30540], 99.95th=[40633], 00:19:27.752 | 99.99th=[40633] 00:19:27.752 bw ( KiB/s): min=20480, max=20480, per=31.56%, avg=20480.00, stdev= 0.00, samples=1 00:19:27.752 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:27.752 lat (usec) : 750=0.03%, 1000=0.14% 00:19:27.752 lat (msec) : 2=1.10%, 4=1.40%, 10=17.15%, 20=69.00%, 50=11.18% 00:19:27.752 cpu : usr=2.60%, sys=4.60%, ctx=469, majf=0, minf=1 00:19:27.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:27.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:27.752 issued rwts: total=4519,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.752 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:27.752 job3: (groupid=0, jobs=1): err= 0: pid=3576576: Wed Jul 24 22:18:22 2024 00:19:27.752 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:19:27.752 slat (nsec): min=1602, max=20697k, avg=149098.31, stdev=1055712.64 00:19:27.752 clat (usec): min=7546, max=67010, avg=19742.62, stdev=9360.81 00:19:27.752 lat (usec): min=7549, max=70981, avg=19891.72, stdev=9452.46 00:19:27.752 clat percentiles (usec): 00:19:27.752 | 1.00th=[ 8848], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11731], 00:19:27.752 | 30.00th=[13042], 40.00th=[15139], 50.00th=[17171], 60.00th=[20055], 00:19:27.752 | 70.00th=[22938], 80.00th=[27395], 90.00th=[31327], 95.00th=[35914], 00:19:27.752 | 99.00th=[54264], 99.50th=[55837], 99.90th=[66847], 99.95th=[66847], 00:19:27.752 | 99.99th=[66847] 00:19:27.752 write: IOPS=3273, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1004msec); 0 zone resets 00:19:27.752 slat (usec): min=2, max=22256, avg=157.79, stdev=901.34 00:19:27.752 clat (usec): min=1672, max=75690, avg=20244.06, stdev=11793.93 00:19:27.752 lat (usec): min=1698, max=75703, avg=20401.86, stdev=11864.19 00:19:27.752 clat percentiles (usec): 00:19:27.752 | 1.00th=[ 5276], 5.00th=[10814], 10.00th=[11207], 20.00th=[12125], 00:19:27.752 | 30.00th=[13698], 40.00th=[15139], 50.00th=[16712], 60.00th=[18744], 00:19:27.752 | 70.00th=[21365], 80.00th=[24249], 90.00th=[34341], 95.00th=[51119], 00:19:27.752 | 99.00th=[66847], 99.50th=[68682], 99.90th=[76022], 99.95th=[76022], 00:19:27.752 | 99.99th=[76022] 00:19:27.752 bw ( KiB/s): min= 8896, max=16384, per=19.48%, avg=12640.00, stdev=5294.82, samples=2 00:19:27.752 iops : min= 2224, max= 4096, avg=3160.00, stdev=1323.70, samples=2 00:19:27.752 lat (msec) : 2=0.03%, 4=0.02%, 10=4.50%, 20=57.81%, 50=34.11% 00:19:27.752 lat (msec) : 100=3.54% 00:19:27.752 cpu : usr=3.39%, sys=2.09%, ctx=449, majf=0, minf=1 00:19:27.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:27.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:27.752 issued rwts: total=3072,3287,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:27.752 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:27.752 00:19:27.752 Run status group 0 (all jobs): 00:19:27.752 READ: bw=60.4MiB/s (63.4MB/s), 12.0MiB/s-17.6MiB/s (12.5MB/s-18.5MB/s), io=61.7MiB (64.6MB), run=1002-1020msec 00:19:27.752 WRITE: bw=63.4MiB/s (66.5MB/s), 12.8MiB/s-18.0MiB/s (13.4MB/s-18.8MB/s), io=64.6MiB (67.8MB), run=1002-1020msec 00:19:27.752 00:19:27.752 Disk stats (read/write): 00:19:27.752 nvme0n1: ios=3585/3584, merge=0/0, ticks=32302/46784, in_queue=79086, util=96.49% 00:19:27.752 nvme0n2: ios=3123/3385, merge=0/0, ticks=15356/23535, in_queue=38891, util=98.78% 00:19:27.752 nvme0n3: ios=3627/4096, merge=0/0, ticks=49505/47277, in_queue=96782, util=98.75% 00:19:27.752 nvme0n4: ios=2789/3072, merge=0/0, ticks=25237/30909, in_queue=56146, util=97.38% 00:19:27.752 22:18:22 -- target/fio.sh@55 -- # sync 00:19:27.752 22:18:22 -- target/fio.sh@59 -- # fio_pid=3576811 00:19:27.752 22:18:22 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:27.752 22:18:22 -- target/fio.sh@61 -- # sleep 3 00:19:27.752 [global] 00:19:27.752 thread=1 00:19:27.752 invalidate=1 00:19:27.752 rw=read 00:19:27.752 time_based=1 00:19:27.752 runtime=10 00:19:27.752 ioengine=libaio 00:19:27.752 direct=1 00:19:27.752 bs=4096 00:19:27.752 iodepth=1 00:19:27.752 norandommap=1 00:19:27.752 numjobs=1 00:19:27.752 00:19:27.752 [job0] 00:19:27.752 filename=/dev/nvme0n1 00:19:27.752 [job1] 00:19:27.752 filename=/dev/nvme0n2 00:19:27.752 [job2] 00:19:27.752 filename=/dev/nvme0n3 00:19:27.752 [job3] 00:19:27.752 filename=/dev/nvme0n4 00:19:27.752 Could not set queue depth (nvme0n1) 00:19:27.752 Could not set queue depth (nvme0n2) 00:19:27.752 Could not set queue depth (nvme0n3) 00:19:27.752 Could not set queue depth (nvme0n4) 00:19:28.010 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.010 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.010 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.010 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.010 fio-3.35 00:19:28.010 Starting 4 threads 00:19:30.534 22:18:25 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:30.792 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=17289216, buflen=4096 00:19:30.792 fio: pid=3576998, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:30.792 22:18:25 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:31.049 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=4460544, buflen=4096 00:19:31.049 fio: pid=3576992, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:31.049 22:18:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.049 22:18:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:31.307 22:18:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.307 22:18:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc1 00:19:31.307 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=2318336, buflen=4096 00:19:31.307 fio: pid=3576970, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:19:31.307 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=24580096, buflen=4096 00:19:31.307 fio: pid=3576977, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:31.307 22:18:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.307 22:18:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:31.565 00:19:31.565 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3576970: Wed Jul 24 22:18:26 2024 00:19:31.565 read: IOPS=182, BW=731KiB/s (748kB/s)(2264KiB/3099msec) 00:19:31.565 slat (usec): min=7, max=14593, avg=35.21, stdev=612.53 00:19:31.565 clat (usec): min=367, max=43027, avg=5436.97, stdev=13179.43 00:19:31.565 lat (usec): min=375, max=56962, avg=5472.20, stdev=13268.22 00:19:31.565 clat percentiles (usec): 00:19:31.565 | 1.00th=[ 482], 5.00th=[ 529], 10.00th=[ 570], 20.00th=[ 578], 00:19:31.565 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[ 635], 00:19:31.565 | 70.00th=[ 685], 80.00th=[ 799], 90.00th=[41681], 95.00th=[42206], 00:19:31.565 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:19:31.565 | 99.99th=[43254] 00:19:31.565 bw ( KiB/s): min= 96, max= 4032, per=6.03%, avg=883.20, stdev=1760.23, samples=5 00:19:31.565 iops : min= 24, max= 1008, avg=220.80, stdev=440.06, samples=5 00:19:31.565 lat (usec) : 500=2.29%, 750=75.49%, 1000=5.47% 00:19:31.565 lat (msec) : 2=4.76%, 4=0.18%, 50=11.64% 00:19:31.565 cpu : usr=0.13%, sys=0.32%, ctx=570, majf=0, minf=1 00:19:31.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.565 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.565 issued rwts: total=567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.565 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3576977: Wed Jul 24 22:18:26 2024 00:19:31.565 read: IOPS=1850, BW=7400KiB/s (7577kB/s)(23.4MiB/3244msec) 00:19:31.565 slat (usec): min=7, max=21685, avg=24.63, stdev=513.03 00:19:31.565 clat (usec): min=325, max=10253, avg=513.55, stdev=149.21 00:19:31.565 lat (usec): min=333, max=22434, avg=538.19, stdev=544.19 00:19:31.565 clat percentiles (usec): 00:19:31.565 | 1.00th=[ 375], 5.00th=[ 412], 10.00th=[ 429], 20.00th=[ 457], 00:19:31.565 | 30.00th=[ 486], 40.00th=[ 498], 50.00th=[ 506], 60.00th=[ 515], 00:19:31.565 | 70.00th=[ 523], 80.00th=[ 545], 90.00th=[ 586], 95.00th=[ 676], 00:19:31.565 | 99.00th=[ 816], 99.50th=[ 865], 99.90th=[ 1057], 99.95th=[ 1090], 00:19:31.565 | 99.99th=[10290] 00:19:31.565 bw ( KiB/s): min= 6353, max= 8408, per=51.61%, avg=7558.83, stdev=705.41, samples=6 00:19:31.565 iops : min= 1588, max= 2102, avg=1889.67, stdev=176.44, samples=6 00:19:31.565 lat (usec) : 500=44.20%, 750=54.25%, 1000=1.23% 00:19:31.565 lat (msec) : 2=0.28%, 20=0.02% 00:19:31.565 cpu : usr=1.08%, sys=3.21%, ctx=6010, majf=0, minf=1 00:19:31.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.565 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.565 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.565 issued rwts: total=6002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.565 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3576992: Wed Jul 24 22:18:26 2024 00:19:31.565 read: IOPS=379, BW=1517KiB/s (1553kB/s)(4356KiB/2872msec) 00:19:31.565 slat (nsec): min=3482, max=36023, avg=12719.75, stdev=6863.57 00:19:31.565 clat (usec): min=337, max=42027, avg=2620.75, stdev=8711.74 00:19:31.565 lat (usec): min=346, max=42036, avg=2633.48, stdev=8711.73 00:19:31.565 clat percentiles (usec): 00:19:31.565 | 1.00th=[ 363], 5.00th=[ 478], 10.00th=[ 502], 20.00th=[ 529], 00:19:31.565 | 30.00th=[ 562], 40.00th=[ 611], 50.00th=[ 668], 60.00th=[ 734], 00:19:31.565 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 1057], 95.00th=[ 1893], 00:19:31.565 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:31.565 | 99.99th=[42206] 00:19:31.565 bw ( KiB/s): min= 96, max= 5264, per=8.85%, avg=1296.00, stdev=2247.25, samples=5 00:19:31.565 iops : min= 24, max= 1316, avg=324.00, stdev=561.81, samples=5 00:19:31.565 lat (usec) : 500=8.81%, 750=53.85%, 1000=26.51% 00:19:31.565 lat (msec) : 2=5.87%, 4=0.18%, 50=4.68% 00:19:31.565 cpu : usr=0.38%, sys=0.52%, ctx=1091, majf=0, minf=1 00:19:31.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.565 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.565 issued rwts: total=1090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.565 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3576998: Wed Jul 24 22:18:26 2024 00:19:31.565 read: IOPS=1577, BW=6309KiB/s (6461kB/s)(16.5MiB/2676msec) 00:19:31.565 slat (nsec): min=3074, max=56356, avg=8858.33, stdev=3850.02 00:19:31.565 clat (usec): min=460, max=1667, avg=622.63, stdev=100.68 00:19:31.565 lat (usec): min=467, max=1675, avg=631.49, stdev=102.13 00:19:31.565 clat percentiles (usec): 00:19:31.565 | 1.00th=[ 510], 5.00th=[ 545], 10.00th=[ 553], 20.00th=[ 562], 00:19:31.565 | 30.00th=[ 570], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 611], 00:19:31.565 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 742], 95.00th=[ 775], 00:19:31.565 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1467], 99.95th=[ 1483], 00:19:31.565 | 99.99th=[ 1663] 00:19:31.565 bw ( KiB/s): min= 5680, max= 6856, per=43.05%, avg=6304.00, stdev=430.03, samples=5 00:19:31.565 iops : min= 1420, max= 1714, avg=1576.00, stdev=107.51, samples=5 00:19:31.565 lat (usec) : 500=0.45%, 750=91.57%, 1000=6.25% 00:19:31.565 lat (msec) : 2=1.71% 00:19:31.565 cpu : usr=0.90%, sys=2.54%, ctx=4223, majf=0, minf=2 00:19:31.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.565 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.565 issued rwts: total=4222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.565 00:19:31.565 Run status group 0 (all jobs): 00:19:31.565 READ: bw=14.3MiB/s (15.0MB/s), 731KiB/s-7400KiB/s 
(748kB/s-7577kB/s), io=46.4MiB (48.6MB), run=2676-3244msec 00:19:31.565 00:19:31.565 Disk stats (read/write): 00:19:31.565 nvme0n1: ios=596/0, merge=0/0, ticks=3056/0, in_queue=3056, util=99.13% 00:19:31.565 nvme0n2: ios=5866/0, merge=0/0, ticks=3174/0, in_queue=3174, util=98.24% 00:19:31.565 nvme0n3: ios=1099/0, merge=0/0, ticks=3016/0, in_queue=3016, util=100.00% 00:19:31.565 nvme0n4: ios=4113/0, merge=0/0, ticks=2506/0, in_queue=2506, util=96.48% 00:19:31.565 22:18:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.565 22:18:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:31.823 22:18:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.823 22:18:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:32.080 22:18:26 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:32.080 22:18:26 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:32.080 22:18:27 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:32.080 22:18:27 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:32.338 22:18:27 -- target/fio.sh@69 -- # fio_status=0 00:19:32.338 22:18:27 -- target/fio.sh@70 -- # wait 3576811 00:19:32.338 22:18:27 -- target/fio.sh@70 -- # fio_status=4 00:19:32.338 22:18:27 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:32.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:32.338 22:18:27 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:32.338 22:18:27 -- common/autotest_common.sh@1198 -- # local i=0 00:19:32.338 22:18:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:32.338 22:18:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:32.338 22:18:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:32.338 22:18:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:32.338 22:18:27 -- common/autotest_common.sh@1210 -- # return 0 00:19:32.338 22:18:27 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:32.338 22:18:27 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:32.338 nvmf hotplug test: fio failed as expected 00:19:32.338 22:18:27 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:32.596 22:18:27 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:32.596 22:18:27 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:32.596 22:18:27 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:32.596 22:18:27 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:32.596 22:18:27 -- target/fio.sh@91 -- # nvmftestfini 00:19:32.596 22:18:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:32.596 22:18:27 -- nvmf/common.sh@116 -- # sync 00:19:32.596 22:18:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:32.596 22:18:27 -- nvmf/common.sh@119 -- # set +e 00:19:32.596 22:18:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:32.596 22:18:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
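(For reference, the hotplug sequence exercised in the trace above — start fio through the wrapper, delete the backing bdevs while it runs, then confirm fio exits with an error — condenses to roughly the following sketch. Paths, bdev names, and fio-wrapper arguments are the ones visible in this log; the status handling is simplified and only illustrative.)

#!/usr/bin/env bash
# Condensed sketch of the nvmf hotplug check traced above (target/fio.sh).
# Paths and bdev names are taken from this run; error handling is simplified.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

sync

# Start a read-only fio workload against the exported namespaces in the background.
"$SPDK/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# Pull the backing bdevs out from under the running workload.
"$SPDK/scripts/rpc.py" bdev_raid_delete concat0
"$SPDK/scripts/rpc.py" bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    "$SPDK/scripts/rpc.py" bdev_malloc_delete "$malloc_bdev"
done

# fio is expected to fail once its devices disappear; a zero exit here would be a test failure.
fio_status=0
wait "$fio_pid" || fio_status=$?
if [ "$fio_status" -ne 0 ]; then
    echo 'nvmf hotplug test: fio failed as expected'
fi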
00:19:32.596 rmmod nvme_tcp 00:19:32.596 rmmod nvme_fabrics 00:19:32.596 rmmod nvme_keyring 00:19:32.596 22:18:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:32.596 22:18:27 -- nvmf/common.sh@123 -- # set -e 00:19:32.596 22:18:27 -- nvmf/common.sh@124 -- # return 0 00:19:32.596 22:18:27 -- nvmf/common.sh@477 -- # '[' -n 3573946 ']' 00:19:32.596 22:18:27 -- nvmf/common.sh@478 -- # killprocess 3573946 00:19:32.596 22:18:27 -- common/autotest_common.sh@926 -- # '[' -z 3573946 ']' 00:19:32.596 22:18:27 -- common/autotest_common.sh@930 -- # kill -0 3573946 00:19:32.596 22:18:27 -- common/autotest_common.sh@931 -- # uname 00:19:32.596 22:18:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:32.596 22:18:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3573946 00:19:32.854 22:18:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:32.854 22:18:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:32.854 22:18:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3573946' 00:19:32.854 killing process with pid 3573946 00:19:32.854 22:18:27 -- common/autotest_common.sh@945 -- # kill 3573946 00:19:32.854 22:18:27 -- common/autotest_common.sh@950 -- # wait 3573946 00:19:32.854 22:18:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:32.854 22:18:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:32.854 22:18:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:32.854 22:18:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.854 22:18:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:32.854 22:18:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.854 22:18:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.854 22:18:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.386 22:18:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:35.386 00:19:35.386 real 0m26.250s 00:19:35.386 user 1m45.934s 00:19:35.386 sys 0m7.454s 00:19:35.386 22:18:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.386 22:18:29 -- common/autotest_common.sh@10 -- # set +x 00:19:35.386 ************************************ 00:19:35.386 END TEST nvmf_fio_target 00:19:35.386 ************************************ 00:19:35.386 22:18:30 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:35.386 22:18:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:35.386 22:18:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:35.386 22:18:30 -- common/autotest_common.sh@10 -- # set +x 00:19:35.386 ************************************ 00:19:35.386 START TEST nvmf_bdevio 00:19:35.386 ************************************ 00:19:35.386 22:18:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:35.386 * Looking for test storage... 
00:19:35.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:35.386 22:18:30 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.386 22:18:30 -- nvmf/common.sh@7 -- # uname -s 00:19:35.386 22:18:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.386 22:18:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.386 22:18:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.386 22:18:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.386 22:18:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.386 22:18:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.386 22:18:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.386 22:18:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.386 22:18:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.386 22:18:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.386 22:18:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.386 22:18:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:35.386 22:18:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.386 22:18:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.386 22:18:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.386 22:18:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:35.386 22:18:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.386 22:18:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.386 22:18:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.386 22:18:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.386 22:18:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.386 22:18:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.386 22:18:30 -- paths/export.sh@5 -- # export PATH 00:19:35.386 22:18:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.386 22:18:30 -- nvmf/common.sh@46 -- # : 0 00:19:35.386 22:18:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:35.386 22:18:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:35.386 22:18:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:35.386 22:18:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.386 22:18:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.386 22:18:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:35.386 22:18:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:35.386 22:18:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:35.386 22:18:30 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.386 22:18:30 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.386 22:18:30 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:35.386 22:18:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:35.386 22:18:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.386 22:18:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:35.386 22:18:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:35.386 22:18:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:35.387 22:18:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.387 22:18:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.387 22:18:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.387 22:18:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:35.387 22:18:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:35.387 22:18:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:35.387 22:18:30 -- common/autotest_common.sh@10 -- # set +x 00:19:40.655 22:18:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:40.655 22:18:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:40.655 22:18:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:40.655 22:18:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:40.655 22:18:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:40.655 22:18:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:40.655 22:18:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:40.655 22:18:35 -- nvmf/common.sh@294 -- # net_devs=() 00:19:40.655 22:18:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:40.655 22:18:35 -- nvmf/common.sh@295 
-- # e810=() 00:19:40.655 22:18:35 -- nvmf/common.sh@295 -- # local -ga e810 00:19:40.655 22:18:35 -- nvmf/common.sh@296 -- # x722=() 00:19:40.655 22:18:35 -- nvmf/common.sh@296 -- # local -ga x722 00:19:40.655 22:18:35 -- nvmf/common.sh@297 -- # mlx=() 00:19:40.655 22:18:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:40.655 22:18:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.655 22:18:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.655 22:18:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.655 22:18:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.655 22:18:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.655 22:18:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.655 22:18:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.655 22:18:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.655 22:18:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.655 22:18:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.655 22:18:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.655 22:18:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:40.655 22:18:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:40.655 22:18:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:40.655 22:18:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:40.655 22:18:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:40.655 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:40.655 22:18:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:40.655 22:18:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:40.655 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:40.655 22:18:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:40.655 22:18:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:40.655 22:18:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:40.656 22:18:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.656 22:18:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:40.656 22:18:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.656 22:18:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:40.656 Found 
net devices under 0000:86:00.0: cvl_0_0 00:19:40.656 22:18:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.656 22:18:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:40.656 22:18:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.656 22:18:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:40.656 22:18:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.656 22:18:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:40.656 Found net devices under 0000:86:00.1: cvl_0_1 00:19:40.656 22:18:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.656 22:18:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:40.656 22:18:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:40.656 22:18:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:40.656 22:18:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:40.656 22:18:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:40.656 22:18:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.656 22:18:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.656 22:18:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.656 22:18:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:40.656 22:18:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.656 22:18:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.656 22:18:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:40.656 22:18:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.656 22:18:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.656 22:18:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:40.656 22:18:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:40.656 22:18:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.656 22:18:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.656 22:18:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.656 22:18:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.656 22:18:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:40.656 22:18:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.656 22:18:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.656 22:18:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.656 22:18:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:40.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:19:40.656 00:19:40.656 --- 10.0.0.2 ping statistics --- 00:19:40.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.656 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:19:40.656 22:18:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:40.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:19:40.656 00:19:40.656 --- 10.0.0.1 ping statistics --- 00:19:40.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.656 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:19:40.656 22:18:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.656 22:18:35 -- nvmf/common.sh@410 -- # return 0 00:19:40.656 22:18:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:40.656 22:18:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.656 22:18:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:40.656 22:18:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:40.656 22:18:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.656 22:18:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:40.656 22:18:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:40.656 22:18:35 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:40.656 22:18:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:40.656 22:18:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:40.656 22:18:35 -- common/autotest_common.sh@10 -- # set +x 00:19:40.656 22:18:35 -- nvmf/common.sh@469 -- # nvmfpid=3581219 00:19:40.656 22:18:35 -- nvmf/common.sh@470 -- # waitforlisten 3581219 00:19:40.656 22:18:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:40.656 22:18:35 -- common/autotest_common.sh@819 -- # '[' -z 3581219 ']' 00:19:40.656 22:18:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.656 22:18:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:40.656 22:18:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.656 22:18:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:40.656 22:18:35 -- common/autotest_common.sh@10 -- # set +x 00:19:40.914 [2024-07-24 22:18:35.796013] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:40.914 [2024-07-24 22:18:35.796063] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.914 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.914 [2024-07-24 22:18:35.853525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:40.914 [2024-07-24 22:18:35.893275] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:40.914 [2024-07-24 22:18:35.893391] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.914 [2024-07-24 22:18:35.893400] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.914 [2024-07-24 22:18:35.893406] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
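(The per-run network plumbing that nvmf_tcp_init performs in the trace above — move the target-side port into its own namespace, address both ends, open TCP/4420, and ping-check the link — reads more easily as the following sketch. Interface names and addresses are the ones this run happened to use.)

# Sketch of the nvmf_tcp_init steps traced above, using this run's names:
# cvl_0_0 becomes the target-side interface inside a namespace, while
# cvl_0_1 stays in the root namespace as the initiator side.
TARGET_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0     # target address

ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

# Allow NVMe/TCP traffic in, then verify both directions respond.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1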
00:19:40.914 [2024-07-24 22:18:35.893523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:40.914 [2024-07-24 22:18:35.893636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:40.914 [2024-07-24 22:18:35.893742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.914 [2024-07-24 22:18:35.893744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:41.478 22:18:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:41.478 22:18:36 -- common/autotest_common.sh@852 -- # return 0 00:19:41.478 22:18:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:41.478 22:18:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:41.478 22:18:36 -- common/autotest_common.sh@10 -- # set +x 00:19:41.737 22:18:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.737 22:18:36 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:41.737 22:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.737 22:18:36 -- common/autotest_common.sh@10 -- # set +x 00:19:41.737 [2024-07-24 22:18:36.632413] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.737 22:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.737 22:18:36 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:41.737 22:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.737 22:18:36 -- common/autotest_common.sh@10 -- # set +x 00:19:41.737 Malloc0 00:19:41.737 22:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.737 22:18:36 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:41.737 22:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.737 22:18:36 -- common/autotest_common.sh@10 -- # set +x 00:19:41.737 22:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.737 22:18:36 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:41.737 22:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.737 22:18:36 -- common/autotest_common.sh@10 -- # set +x 00:19:41.737 22:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.737 22:18:36 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:41.737 22:18:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:41.737 22:18:36 -- common/autotest_common.sh@10 -- # set +x 00:19:41.737 [2024-07-24 22:18:36.676094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.737 22:18:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:41.737 22:18:36 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:41.737 22:18:36 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:41.737 22:18:36 -- nvmf/common.sh@520 -- # config=() 00:19:41.737 22:18:36 -- nvmf/common.sh@520 -- # local subsystem config 00:19:41.737 22:18:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:41.737 22:18:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:41.737 { 00:19:41.737 "params": { 00:19:41.737 "name": "Nvme$subsystem", 00:19:41.737 "trtype": "$TEST_TRANSPORT", 00:19:41.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:41.737 "adrfam": "ipv4", 00:19:41.737 "trsvcid": 
"$NVMF_PORT", 00:19:41.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:41.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:41.737 "hdgst": ${hdgst:-false}, 00:19:41.737 "ddgst": ${ddgst:-false} 00:19:41.737 }, 00:19:41.737 "method": "bdev_nvme_attach_controller" 00:19:41.737 } 00:19:41.737 EOF 00:19:41.737 )") 00:19:41.737 22:18:36 -- nvmf/common.sh@542 -- # cat 00:19:41.737 22:18:36 -- nvmf/common.sh@544 -- # jq . 00:19:41.737 22:18:36 -- nvmf/common.sh@545 -- # IFS=, 00:19:41.737 22:18:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:41.737 "params": { 00:19:41.737 "name": "Nvme1", 00:19:41.737 "trtype": "tcp", 00:19:41.737 "traddr": "10.0.0.2", 00:19:41.737 "adrfam": "ipv4", 00:19:41.737 "trsvcid": "4420", 00:19:41.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.737 "hdgst": false, 00:19:41.737 "ddgst": false 00:19:41.737 }, 00:19:41.737 "method": "bdev_nvme_attach_controller" 00:19:41.737 }' 00:19:41.737 [2024-07-24 22:18:36.722078] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:41.737 [2024-07-24 22:18:36.722123] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3581464 ] 00:19:41.737 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.737 [2024-07-24 22:18:36.778327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:41.737 [2024-07-24 22:18:36.818167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.737 [2024-07-24 22:18:36.818279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.737 [2024-07-24 22:18:36.818280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.996 [2024-07-24 22:18:37.128612] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:41.996 [2024-07-24 22:18:37.128645] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:42.289 I/O targets: 00:19:42.289 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:42.289 00:19:42.289 00:19:42.289 CUnit - A unit testing framework for C - Version 2.1-3 00:19:42.289 http://cunit.sourceforge.net/ 00:19:42.289 00:19:42.289 00:19:42.289 Suite: bdevio tests on: Nvme1n1 00:19:42.289 Test: blockdev write read block ...passed 00:19:42.289 Test: blockdev write zeroes read block ...passed 00:19:42.289 Test: blockdev write zeroes read no split ...passed 00:19:42.289 Test: blockdev write zeroes read split ...passed 00:19:42.289 Test: blockdev write zeroes read split partial ...passed 00:19:42.289 Test: blockdev reset ...[2024-07-24 22:18:37.374864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:42.289 [2024-07-24 22:18:37.374917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa46160 (9): Bad file descriptor 00:19:42.559 [2024-07-24 22:18:37.428869] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:42.559 passed 00:19:42.559 Test: blockdev write read 8 blocks ...passed 00:19:42.559 Test: blockdev write read size > 128k ...passed 00:19:42.559 Test: blockdev write read invalid size ...passed 00:19:42.559 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:42.559 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:42.559 Test: blockdev write read max offset ...passed 00:19:42.559 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:42.559 Test: blockdev writev readv 8 blocks ...passed 00:19:42.559 Test: blockdev writev readv 30 x 1block ...passed 00:19:42.559 Test: blockdev writev readv block ...passed 00:19:42.559 Test: blockdev writev readv size > 128k ...passed 00:19:42.559 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:42.559 Test: blockdev comparev and writev ...[2024-07-24 22:18:37.664350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:42.559 [2024-07-24 22:18:37.664385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:42.559 [2024-07-24 22:18:37.664399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:42.559 [2024-07-24 22:18:37.664406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:42.559 [2024-07-24 22:18:37.664972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:42.559 [2024-07-24 22:18:37.664984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:42.559 [2024-07-24 22:18:37.664995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:42.559 [2024-07-24 22:18:37.665003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:42.559 [2024-07-24 22:18:37.665594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:42.559 [2024-07-24 22:18:37.665605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:42.559 [2024-07-24 22:18:37.665617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:42.559 [2024-07-24 22:18:37.665624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:42.559 [2024-07-24 22:18:37.666128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:42.559 [2024-07-24 22:18:37.666140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:42.559 [2024-07-24 22:18:37.666151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:42.559 [2024-07-24 22:18:37.666159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:42.817 passed 00:19:42.817 Test: blockdev nvme passthru rw ...passed 00:19:42.817 Test: blockdev nvme passthru vendor specific ...[2024-07-24 22:18:37.749971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:42.817 [2024-07-24 22:18:37.749996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:42.817 [2024-07-24 22:18:37.750432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:42.817 [2024-07-24 22:18:37.750443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:42.817 [2024-07-24 22:18:37.750813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:42.817 [2024-07-24 22:18:37.750824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:42.817 [2024-07-24 22:18:37.751188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:42.817 [2024-07-24 22:18:37.751200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:42.817 passed 00:19:42.817 Test: blockdev nvme admin passthru ...passed 00:19:42.817 Test: blockdev copy ...passed 00:19:42.817 00:19:42.817 Run Summary: Type Total Ran Passed Failed Inactive 00:19:42.817 suites 1 1 n/a 0 0 00:19:42.817 tests 23 23 23 0 0 00:19:42.817 asserts 152 152 152 0 n/a 00:19:42.817 00:19:42.817 Elapsed time = 1.357 seconds 00:19:43.076 22:18:37 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:43.076 22:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.076 22:18:37 -- common/autotest_common.sh@10 -- # set +x 00:19:43.076 22:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.076 22:18:37 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:43.076 22:18:37 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:43.076 22:18:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:43.076 22:18:37 -- nvmf/common.sh@116 -- # sync 00:19:43.076 22:18:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:43.076 22:18:37 -- nvmf/common.sh@119 -- # set +e 00:19:43.076 22:18:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:43.076 22:18:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:43.076 rmmod nvme_tcp 00:19:43.076 rmmod nvme_fabrics 00:19:43.076 rmmod nvme_keyring 00:19:43.076 22:18:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:43.076 22:18:38 -- nvmf/common.sh@123 -- # set -e 00:19:43.076 22:18:38 -- nvmf/common.sh@124 -- # return 0 00:19:43.076 22:18:38 -- nvmf/common.sh@477 -- # '[' -n 3581219 ']' 00:19:43.076 22:18:38 -- nvmf/common.sh@478 -- # killprocess 3581219 00:19:43.076 22:18:38 -- common/autotest_common.sh@926 -- # '[' -z 3581219 ']' 00:19:43.076 22:18:38 -- common/autotest_common.sh@930 -- # kill -0 3581219 00:19:43.076 22:18:38 -- common/autotest_common.sh@931 -- # uname 00:19:43.076 22:18:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:43.076 22:18:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3581219 00:19:43.076 22:18:38 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:43.076 22:18:38 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:43.076 22:18:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3581219' 00:19:43.076 killing process with pid 3581219 00:19:43.076 22:18:38 -- common/autotest_common.sh@945 -- # kill 3581219 00:19:43.076 22:18:38 -- common/autotest_common.sh@950 -- # wait 3581219 00:19:43.335 22:18:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:43.335 22:18:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:43.335 22:18:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:43.335 22:18:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:43.335 22:18:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:43.335 22:18:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.335 22:18:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.335 22:18:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.237 22:18:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:45.237 00:19:45.237 real 0m10.318s 00:19:45.237 user 0m13.722s 00:19:45.237 sys 0m4.672s 00:19:45.237 22:18:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.237 22:18:40 -- common/autotest_common.sh@10 -- # set +x 00:19:45.237 ************************************ 00:19:45.238 END TEST nvmf_bdevio 00:19:45.238 ************************************ 00:19:45.495 22:18:40 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:45.495 22:18:40 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:45.495 22:18:40 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:45.495 22:18:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:45.495 22:18:40 -- common/autotest_common.sh@10 -- # set +x 00:19:45.495 ************************************ 00:19:45.495 START TEST nvmf_bdevio_no_huge 00:19:45.495 ************************************ 00:19:45.495 22:18:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:45.495 * Looking for test storage... 
00:19:45.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.495 22:18:40 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.495 22:18:40 -- nvmf/common.sh@7 -- # uname -s 00:19:45.495 22:18:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.495 22:18:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.495 22:18:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.495 22:18:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.495 22:18:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.495 22:18:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.495 22:18:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.495 22:18:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.495 22:18:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.495 22:18:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.495 22:18:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:45.495 22:18:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:45.495 22:18:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.495 22:18:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.495 22:18:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.495 22:18:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.496 22:18:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.496 22:18:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.496 22:18:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.496 22:18:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.496 22:18:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.496 22:18:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.496 22:18:40 -- paths/export.sh@5 -- # export PATH 00:19:45.496 22:18:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.496 22:18:40 -- nvmf/common.sh@46 -- # : 0 00:19:45.496 22:18:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:45.496 22:18:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:45.496 22:18:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:45.496 22:18:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.496 22:18:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.496 22:18:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:45.496 22:18:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:45.496 22:18:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:45.496 22:18:40 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:45.496 22:18:40 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:45.496 22:18:40 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:45.496 22:18:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:45.496 22:18:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.496 22:18:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:45.496 22:18:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:45.496 22:18:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:45.496 22:18:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.496 22:18:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.496 22:18:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.496 22:18:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:45.496 22:18:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:45.496 22:18:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:45.496 22:18:40 -- common/autotest_common.sh@10 -- # set +x 00:19:50.762 22:18:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:50.762 22:18:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:50.762 22:18:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:50.762 22:18:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:50.762 22:18:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:50.762 22:18:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:50.762 22:18:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:50.762 22:18:45 -- nvmf/common.sh@294 -- # net_devs=() 00:19:50.762 22:18:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:50.762 22:18:45 -- nvmf/common.sh@295 
-- # e810=() 00:19:50.762 22:18:45 -- nvmf/common.sh@295 -- # local -ga e810 00:19:50.762 22:18:45 -- nvmf/common.sh@296 -- # x722=() 00:19:50.762 22:18:45 -- nvmf/common.sh@296 -- # local -ga x722 00:19:50.762 22:18:45 -- nvmf/common.sh@297 -- # mlx=() 00:19:50.762 22:18:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:50.762 22:18:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.762 22:18:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.762 22:18:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.762 22:18:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.762 22:18:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.762 22:18:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.762 22:18:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.762 22:18:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.762 22:18:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.762 22:18:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.762 22:18:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.762 22:18:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:50.762 22:18:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:50.762 22:18:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:50.762 22:18:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:50.762 22:18:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:50.762 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:50.762 22:18:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:50.762 22:18:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:50.762 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:50.762 22:18:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:50.762 22:18:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:50.762 22:18:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.762 22:18:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:50.762 22:18:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.762 22:18:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:50.762 Found 
net devices under 0000:86:00.0: cvl_0_0 00:19:50.762 22:18:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.762 22:18:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:50.762 22:18:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.762 22:18:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:50.762 22:18:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.762 22:18:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:50.762 Found net devices under 0000:86:00.1: cvl_0_1 00:19:50.762 22:18:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.762 22:18:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:50.762 22:18:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:50.762 22:18:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:50.762 22:18:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:50.762 22:18:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.762 22:18:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.762 22:18:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.762 22:18:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:50.762 22:18:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:50.762 22:18:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:50.762 22:18:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:50.762 22:18:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:50.763 22:18:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.763 22:18:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:50.763 22:18:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:50.763 22:18:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:50.763 22:18:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:50.763 22:18:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:50.763 22:18:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:50.763 22:18:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:50.763 22:18:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:50.763 22:18:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:50.763 22:18:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:50.763 22:18:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:50.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:50.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:19:50.763 00:19:50.763 --- 10.0.0.2 ping statistics --- 00:19:50.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.763 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:19:50.763 22:18:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:50.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:50.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:19:50.763 00:19:50.763 --- 10.0.0.1 ping statistics --- 00:19:50.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.763 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:19:50.763 22:18:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.763 22:18:45 -- nvmf/common.sh@410 -- # return 0 00:19:50.763 22:18:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:50.763 22:18:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.763 22:18:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:50.763 22:18:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:50.763 22:18:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.763 22:18:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:50.763 22:18:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:50.763 22:18:45 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:50.763 22:18:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:50.763 22:18:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:50.763 22:18:45 -- common/autotest_common.sh@10 -- # set +x 00:19:50.763 22:18:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:50.763 22:18:45 -- nvmf/common.sh@469 -- # nvmfpid=3585055 00:19:50.763 22:18:45 -- nvmf/common.sh@470 -- # waitforlisten 3585055 00:19:50.763 22:18:45 -- common/autotest_common.sh@819 -- # '[' -z 3585055 ']' 00:19:50.763 22:18:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.763 22:18:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:50.763 22:18:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.763 22:18:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:50.763 22:18:45 -- common/autotest_common.sh@10 -- # set +x 00:19:50.763 [2024-07-24 22:18:45.717233] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:50.763 [2024-07-24 22:18:45.717277] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:50.763 [2024-07-24 22:18:45.776454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:50.763 [2024-07-24 22:18:45.840024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:50.763 [2024-07-24 22:18:45.840127] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.763 [2024-07-24 22:18:45.840135] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.763 [2024-07-24 22:18:45.840141] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
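The nvmf_tcp_init block above builds the single-host test topology: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace and acts as the target, while the other port (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of that setup, assuming the same interface names and 10.0.0.0/24 addressing shown in the trace (requires root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # target -> initiator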
00:19:50.763 [2024-07-24 22:18:45.840247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:50.763 [2024-07-24 22:18:45.840353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:50.763 [2024-07-24 22:18:45.840459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:50.763 [2024-07-24 22:18:45.840457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.695 22:18:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:51.695 22:18:46 -- common/autotest_common.sh@852 -- # return 0 00:19:51.695 22:18:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:51.695 22:18:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:51.695 22:18:46 -- common/autotest_common.sh@10 -- # set +x 00:19:51.695 22:18:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.695 22:18:46 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:51.695 22:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:51.695 22:18:46 -- common/autotest_common.sh@10 -- # set +x 00:19:51.695 [2024-07-24 22:18:46.581266] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.695 22:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:51.695 22:18:46 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:51.695 22:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:51.695 22:18:46 -- common/autotest_common.sh@10 -- # set +x 00:19:51.695 Malloc0 00:19:51.695 22:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:51.695 22:18:46 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:51.695 22:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:51.695 22:18:46 -- common/autotest_common.sh@10 -- # set +x 00:19:51.695 22:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:51.695 22:18:46 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:51.695 22:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:51.695 22:18:46 -- common/autotest_common.sh@10 -- # set +x 00:19:51.695 22:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:51.695 22:18:46 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:51.695 22:18:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:51.695 22:18:46 -- common/autotest_common.sh@10 -- # set +x 00:19:51.695 [2024-07-24 22:18:46.617515] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.695 22:18:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:51.695 22:18:46 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:51.695 22:18:46 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:51.695 22:18:46 -- nvmf/common.sh@520 -- # config=() 00:19:51.695 22:18:46 -- nvmf/common.sh@520 -- # local subsystem config 00:19:51.695 22:18:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:51.695 22:18:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:51.695 { 00:19:51.695 "params": { 00:19:51.695 "name": "Nvme$subsystem", 00:19:51.695 "trtype": "$TEST_TRANSPORT", 00:19:51.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:51.695 "adrfam": "ipv4", 00:19:51.695 
"trsvcid": "$NVMF_PORT", 00:19:51.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:51.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:51.695 "hdgst": ${hdgst:-false}, 00:19:51.695 "ddgst": ${ddgst:-false} 00:19:51.695 }, 00:19:51.695 "method": "bdev_nvme_attach_controller" 00:19:51.695 } 00:19:51.695 EOF 00:19:51.695 )") 00:19:51.695 22:18:46 -- nvmf/common.sh@542 -- # cat 00:19:51.695 22:18:46 -- nvmf/common.sh@544 -- # jq . 00:19:51.695 22:18:46 -- nvmf/common.sh@545 -- # IFS=, 00:19:51.695 22:18:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:51.695 "params": { 00:19:51.695 "name": "Nvme1", 00:19:51.695 "trtype": "tcp", 00:19:51.695 "traddr": "10.0.0.2", 00:19:51.695 "adrfam": "ipv4", 00:19:51.695 "trsvcid": "4420", 00:19:51.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.695 "hdgst": false, 00:19:51.695 "ddgst": false 00:19:51.695 }, 00:19:51.695 "method": "bdev_nvme_attach_controller" 00:19:51.695 }' 00:19:51.695 [2024-07-24 22:18:46.664349] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:19:51.695 [2024-07-24 22:18:46.664402] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3585275 ] 00:19:51.695 [2024-07-24 22:18:46.718384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:51.695 [2024-07-24 22:18:46.782550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.695 [2024-07-24 22:18:46.782648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.695 [2024-07-24 22:18:46.782648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.953 [2024-07-24 22:18:46.955739] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:51.953 [2024-07-24 22:18:46.955770] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:51.953 I/O targets: 00:19:51.953 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:51.953 00:19:51.953 00:19:51.953 CUnit - A unit testing framework for C - Version 2.1-3 00:19:51.953 http://cunit.sourceforge.net/ 00:19:51.953 00:19:51.953 00:19:51.953 Suite: bdevio tests on: Nvme1n1 00:19:51.953 Test: blockdev write read block ...passed 00:19:51.953 Test: blockdev write zeroes read block ...passed 00:19:51.953 Test: blockdev write zeroes read no split ...passed 00:19:52.211 Test: blockdev write zeroes read split ...passed 00:19:52.211 Test: blockdev write zeroes read split partial ...passed 00:19:52.211 Test: blockdev reset ...[2024-07-24 22:18:47.156408] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:52.211 [2024-07-24 22:18:47.156467] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd0120 (9): Bad file descriptor 00:19:52.211 [2024-07-24 22:18:47.214191] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:52.211 passed 00:19:52.211 Test: blockdev write read 8 blocks ...passed 00:19:52.211 Test: blockdev write read size > 128k ...passed 00:19:52.211 Test: blockdev write read invalid size ...passed 00:19:52.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:52.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:52.211 Test: blockdev write read max offset ...passed 00:19:52.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:52.469 Test: blockdev writev readv 8 blocks ...passed 00:19:52.469 Test: blockdev writev readv 30 x 1block ...passed 00:19:52.469 Test: blockdev writev readv block ...passed 00:19:52.469 Test: blockdev writev readv size > 128k ...passed 00:19:52.469 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:52.469 Test: blockdev comparev and writev ...[2024-07-24 22:18:47.405755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.469 [2024-07-24 22:18:47.405783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:52.469 [2024-07-24 22:18:47.405797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.469 [2024-07-24 22:18:47.405805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:52.469 [2024-07-24 22:18:47.406385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.469 [2024-07-24 22:18:47.406398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:52.469 [2024-07-24 22:18:47.406410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.469 [2024-07-24 22:18:47.406418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:52.469 [2024-07-24 22:18:47.407019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.469 [2024-07-24 22:18:47.407032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:52.469 [2024-07-24 22:18:47.407048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.469 [2024-07-24 22:18:47.407056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:52.469 [2024-07-24 22:18:47.407639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.469 [2024-07-24 22:18:47.407650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:52.469 [2024-07-24 22:18:47.407662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:52.469 [2024-07-24 22:18:47.407669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:52.469 passed 00:19:52.469 Test: blockdev nvme passthru rw ...passed 00:19:52.469 Test: blockdev nvme passthru vendor specific ...[2024-07-24 22:18:47.491952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:52.469 [2024-07-24 22:18:47.491968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:52.469 [2024-07-24 22:18:47.492409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:52.469 [2024-07-24 22:18:47.492421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:52.469 [2024-07-24 22:18:47.492775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:52.469 [2024-07-24 22:18:47.492787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:52.469 [2024-07-24 22:18:47.493147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:52.469 [2024-07-24 22:18:47.493159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:52.469 passed 00:19:52.469 Test: blockdev nvme admin passthru ...passed 00:19:52.469 Test: blockdev copy ...passed 00:19:52.469 00:19:52.469 Run Summary: Type Total Ran Passed Failed Inactive 00:19:52.469 suites 1 1 n/a 0 0 00:19:52.469 tests 23 23 23 0 0 00:19:52.469 asserts 152 152 152 0 n/a 00:19:52.469 00:19:52.469 Elapsed time = 1.187 seconds 00:19:52.727 22:18:47 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.727 22:18:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.727 22:18:47 -- common/autotest_common.sh@10 -- # set +x 00:19:52.727 22:18:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.727 22:18:47 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:52.727 22:18:47 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:52.727 22:18:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:52.727 22:18:47 -- nvmf/common.sh@116 -- # sync 00:19:52.727 22:18:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:52.727 22:18:47 -- nvmf/common.sh@119 -- # set +e 00:19:52.727 22:18:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:52.727 22:18:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:52.727 rmmod nvme_tcp 00:19:52.727 rmmod nvme_fabrics 00:19:52.727 rmmod nvme_keyring 00:19:52.727 22:18:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:52.985 22:18:47 -- nvmf/common.sh@123 -- # set -e 00:19:52.985 22:18:47 -- nvmf/common.sh@124 -- # return 0 00:19:52.985 22:18:47 -- nvmf/common.sh@477 -- # '[' -n 3585055 ']' 00:19:52.985 22:18:47 -- nvmf/common.sh@478 -- # killprocess 3585055 00:19:52.985 22:18:47 -- common/autotest_common.sh@926 -- # '[' -z 3585055 ']' 00:19:52.985 22:18:47 -- common/autotest_common.sh@930 -- # kill -0 3585055 00:19:52.985 22:18:47 -- common/autotest_common.sh@931 -- # uname 00:19:52.985 22:18:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:52.985 22:18:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3585055 00:19:52.985 22:18:47 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:52.985 22:18:47 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:52.985 22:18:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3585055' 00:19:52.985 killing process with pid 3585055 00:19:52.985 22:18:47 -- common/autotest_common.sh@945 -- # kill 3585055 00:19:52.985 22:18:47 -- common/autotest_common.sh@950 -- # wait 3585055 00:19:53.243 22:18:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:53.243 22:18:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:53.243 22:18:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:53.243 22:18:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.243 22:18:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:53.243 22:18:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.243 22:18:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.243 22:18:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.145 22:18:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:55.145 00:19:55.145 real 0m9.882s 00:19:55.145 user 0m12.803s 00:19:55.145 sys 0m4.691s 00:19:55.145 22:18:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.145 22:18:50 -- common/autotest_common.sh@10 -- # set +x 00:19:55.145 ************************************ 00:19:55.145 END TEST nvmf_bdevio_no_huge 00:19:55.145 ************************************ 00:19:55.404 22:18:50 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:55.404 22:18:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:55.404 22:18:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:55.404 22:18:50 -- common/autotest_common.sh@10 -- # set +x 00:19:55.404 ************************************ 00:19:55.404 START TEST nvmf_tls 00:19:55.404 ************************************ 00:19:55.404 22:18:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:55.404 * Looking for test storage... 
00:19:55.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:55.404 22:18:50 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:55.404 22:18:50 -- nvmf/common.sh@7 -- # uname -s 00:19:55.404 22:18:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.404 22:18:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.404 22:18:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.404 22:18:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.404 22:18:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.404 22:18:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.404 22:18:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.404 22:18:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.404 22:18:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.404 22:18:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.404 22:18:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:55.404 22:18:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:55.404 22:18:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.404 22:18:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.404 22:18:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:55.404 22:18:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:55.404 22:18:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.404 22:18:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.404 22:18:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.404 22:18:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.404 22:18:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.404 22:18:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.404 22:18:50 -- paths/export.sh@5 -- # export PATH 00:19:55.404 22:18:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.404 22:18:50 -- nvmf/common.sh@46 -- # : 0 00:19:55.404 22:18:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:55.404 22:18:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:55.404 22:18:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:55.404 22:18:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.404 22:18:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.404 22:18:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:55.404 22:18:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:55.404 22:18:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:55.404 22:18:50 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:55.404 22:18:50 -- target/tls.sh@71 -- # nvmftestinit 00:19:55.404 22:18:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:55.404 22:18:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.404 22:18:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:55.404 22:18:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:55.404 22:18:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:55.404 22:18:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.404 22:18:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.404 22:18:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.404 22:18:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:55.404 22:18:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:55.404 22:18:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:55.404 22:18:50 -- common/autotest_common.sh@10 -- # set +x 00:20:00.668 22:18:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:00.668 22:18:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:00.668 22:18:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:00.668 22:18:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:00.668 22:18:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:00.668 22:18:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:00.668 22:18:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:00.668 22:18:55 -- nvmf/common.sh@294 -- # net_devs=() 00:20:00.668 22:18:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:00.668 22:18:55 -- nvmf/common.sh@295 -- # e810=() 00:20:00.668 
22:18:55 -- nvmf/common.sh@295 -- # local -ga e810 00:20:00.668 22:18:55 -- nvmf/common.sh@296 -- # x722=() 00:20:00.668 22:18:55 -- nvmf/common.sh@296 -- # local -ga x722 00:20:00.668 22:18:55 -- nvmf/common.sh@297 -- # mlx=() 00:20:00.668 22:18:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:00.668 22:18:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.668 22:18:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.668 22:18:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.668 22:18:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.668 22:18:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.668 22:18:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.668 22:18:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.668 22:18:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.668 22:18:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.668 22:18:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.668 22:18:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:00.668 22:18:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:00.668 22:18:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:00.668 22:18:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:00.668 22:18:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:00.668 22:18:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:00.668 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:00.668 22:18:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:00.668 22:18:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:00.668 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:00.668 22:18:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:00.668 22:18:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:00.668 22:18:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.668 22:18:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:00.668 22:18:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.668 22:18:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:00.668 Found net devices under 
0000:86:00.0: cvl_0_0 00:20:00.668 22:18:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.668 22:18:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:00.668 22:18:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.668 22:18:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:00.668 22:18:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.668 22:18:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:00.668 Found net devices under 0000:86:00.1: cvl_0_1 00:20:00.668 22:18:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.668 22:18:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:00.668 22:18:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:00.668 22:18:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:00.668 22:18:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:00.668 22:18:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.668 22:18:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.668 22:18:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:00.668 22:18:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:00.668 22:18:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:00.668 22:18:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:00.668 22:18:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:00.668 22:18:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:00.668 22:18:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.668 22:18:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:00.668 22:18:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:00.668 22:18:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:00.668 22:18:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:00.668 22:18:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:00.668 22:18:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:00.668 22:18:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:00.668 22:18:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:00.927 22:18:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:00.927 22:18:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:00.927 22:18:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:00.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:20:00.927 00:20:00.927 --- 10.0.0.2 ping statistics --- 00:20:00.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.927 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:20:00.927 22:18:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:00.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:20:00.927 00:20:00.927 --- 10.0.0.1 ping statistics --- 00:20:00.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.927 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:20:00.927 22:18:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.927 22:18:55 -- nvmf/common.sh@410 -- # return 0 00:20:00.927 22:18:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:00.927 22:18:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.928 22:18:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:00.928 22:18:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:00.928 22:18:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.928 22:18:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:00.928 22:18:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:00.928 22:18:55 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:00.928 22:18:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:00.928 22:18:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:00.928 22:18:55 -- common/autotest_common.sh@10 -- # set +x 00:20:00.928 22:18:55 -- nvmf/common.sh@469 -- # nvmfpid=3589038 00:20:00.928 22:18:55 -- nvmf/common.sh@470 -- # waitforlisten 3589038 00:20:00.928 22:18:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:00.928 22:18:55 -- common/autotest_common.sh@819 -- # '[' -z 3589038 ']' 00:20:00.928 22:18:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.928 22:18:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:00.928 22:18:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.928 22:18:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:00.928 22:18:55 -- common/autotest_common.sh@10 -- # set +x 00:20:00.928 [2024-07-24 22:18:55.954315] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:00.928 [2024-07-24 22:18:55.954359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.928 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.928 [2024-07-24 22:18:56.013353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.928 [2024-07-24 22:18:56.051417] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:00.928 [2024-07-24 22:18:56.051526] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.928 [2024-07-24 22:18:56.051535] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.928 [2024-07-24 22:18:56.051541] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
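nvmfappstart above launches nvmf_tgt inside that namespace with --wait-for-rpc, which holds off subsystem initialization until the TLS socket options can be pushed over RPC, and waitforlisten then blocks until the RPC socket answers. A rough sketch of that launch-and-wait pattern, run from the spdk repository root; the polling loop is only a stand-in for waitforlisten, whose actual retry logic lives in autotest_common.sh (see the waitforlisten frames in the trace):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!
    # poll the default RPC socket until the app is listening (stand-in for waitforlisten)
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"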
00:20:00.928 [2024-07-24 22:18:56.051562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.186 22:18:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:01.186 22:18:56 -- common/autotest_common.sh@852 -- # return 0 00:20:01.187 22:18:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:01.187 22:18:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:01.187 22:18:56 -- common/autotest_common.sh@10 -- # set +x 00:20:01.187 22:18:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.187 22:18:56 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:20:01.187 22:18:56 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:01.187 true 00:20:01.187 22:18:56 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:01.187 22:18:56 -- target/tls.sh@82 -- # jq -r .tls_version 00:20:01.445 22:18:56 -- target/tls.sh@82 -- # version=0 00:20:01.445 22:18:56 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:20:01.445 22:18:56 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:01.704 22:18:56 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:01.704 22:18:56 -- target/tls.sh@90 -- # jq -r .tls_version 00:20:01.704 22:18:56 -- target/tls.sh@90 -- # version=13 00:20:01.704 22:18:56 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:20:01.704 22:18:56 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:01.963 22:18:56 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:01.963 22:18:56 -- target/tls.sh@98 -- # jq -r .tls_version 00:20:02.222 22:18:57 -- target/tls.sh@98 -- # version=7 00:20:02.222 22:18:57 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:20:02.222 22:18:57 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:02.222 22:18:57 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:02.222 22:18:57 -- target/tls.sh@105 -- # ktls=false 00:20:02.222 22:18:57 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:20:02.222 22:18:57 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:02.481 22:18:57 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:02.481 22:18:57 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:02.481 22:18:57 -- target/tls.sh@113 -- # ktls=true 00:20:02.481 22:18:57 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:20:02.481 22:18:57 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:02.740 22:18:57 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:02.740 22:18:57 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:20:02.999 22:18:57 -- target/tls.sh@121 -- # ktls=false 00:20:02.999 22:18:57 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:20:02.999 22:18:57 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:20:02.999 22:18:57 -- target/tls.sh@49 -- # local key hash crc 00:20:02.999 22:18:57 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:20:02.999 22:18:57 -- target/tls.sh@51 -- # hash=01 00:20:02.999 22:18:57 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:20:02.999 22:18:57 -- target/tls.sh@52 -- # gzip -1 -c 00:20:02.999 22:18:57 -- target/tls.sh@52 -- # tail -c8 00:20:02.999 22:18:57 -- target/tls.sh@52 -- # head -c 4 00:20:02.999 22:18:57 -- target/tls.sh@52 -- # crc='p$H�' 00:20:02.999 22:18:57 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:02.999 22:18:57 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:20:02.999 22:18:57 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:02.999 22:18:57 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:02.999 22:18:57 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:20:02.999 22:18:57 -- target/tls.sh@49 -- # local key hash crc 00:20:02.999 22:18:57 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:20:02.999 22:18:57 -- target/tls.sh@51 -- # hash=01 00:20:02.999 22:18:57 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:20:02.999 22:18:57 -- target/tls.sh@52 -- # head -c 4 00:20:02.999 22:18:57 -- target/tls.sh@52 -- # gzip -1 -c 00:20:02.999 22:18:57 -- target/tls.sh@52 -- # tail -c8 00:20:02.999 22:18:57 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:20:02.999 22:18:57 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:02.999 22:18:57 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:20:02.999 22:18:57 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:02.999 22:18:57 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:02.999 22:18:57 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:02.999 22:18:57 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:02.999 22:18:57 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:02.999 22:18:57 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:02.999 22:18:57 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:02.999 22:18:57 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:02.999 22:18:57 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:03.258 22:18:58 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:03.258 22:18:58 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:03.258 22:18:58 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:03.258 22:18:58 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:03.516 [2024-07-24 22:18:58.542508] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
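format_interchange_psk above converts the configured hex key into the NVMe TLS PSK interchange string by appending a CRC32 of the key and base64-encoding the result; the gzip -1 -c | tail -c8 | head -c 4 pipeline is just a portable way to pull the CRC32 out of the gzip trailer. A standalone sketch of the same derivation, using the sample key from the trace and writing the result the way the test does before handing it to nvmf_subsystem_add_host --psk:

    key=00112233445566778899aabbccddeeff                        # configured PSK (hash field 01, as in the trace)
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)   # gzip trailer = CRC32 (little-endian) + input size
    # note: raw CRC bytes in a shell variable work for this key; NUL or trailing-newline bytes would need other handling
    psk="NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"
    echo "$psk"   # expected, per the trace: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
    echo -n "$psk" > key1.txt && chmod 0600 key1.txt            # key files get restrictive permissions before use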
00:20:03.516 22:18:58 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:03.775 22:18:58 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.775 [2024-07-24 22:18:58.871371] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.775 [2024-07-24 22:18:58.871561] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.775 22:18:58 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:04.033 malloc0 00:20:04.033 22:18:59 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:04.291 22:18:59 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:04.291 22:18:59 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:04.291 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.556 Initializing NVMe Controllers 00:20:16.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:16.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:16.556 Initialization complete. Launching workers. 
00:20:16.556 ======================================================== 00:20:16.556 Latency(us) 00:20:16.556 Device Information : IOPS MiB/s Average min max 00:20:16.556 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17124.53 66.89 3737.67 810.70 8047.47 00:20:16.556 ======================================================== 00:20:16.556 Total : 17124.53 66.89 3737.67 810.70 8047.47 00:20:16.556 00:20:16.556 22:19:09 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:16.556 22:19:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:16.556 22:19:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:16.556 22:19:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:16.556 22:19:09 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:16.556 22:19:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:16.556 22:19:09 -- target/tls.sh@28 -- # bdevperf_pid=3591215 00:20:16.556 22:19:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:16.556 22:19:09 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:16.556 22:19:09 -- target/tls.sh@31 -- # waitforlisten 3591215 /var/tmp/bdevperf.sock 00:20:16.556 22:19:09 -- common/autotest_common.sh@819 -- # '[' -z 3591215 ']' 00:20:16.556 22:19:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.556 22:19:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:16.556 22:19:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:16.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.556 22:19:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:16.556 22:19:09 -- common/autotest_common.sh@10 -- # set +x 00:20:16.556 [2024-07-24 22:19:09.509535] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:20:16.556 [2024-07-24 22:19:09.509585] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591215 ] 00:20:16.556 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.556 [2024-07-24 22:19:09.560340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.556 [2024-07-24 22:19:09.596973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.556 22:19:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:16.556 22:19:10 -- common/autotest_common.sh@852 -- # return 0 00:20:16.556 22:19:10 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:16.556 [2024-07-24 22:19:10.453536] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.556 TLSTESTn1 00:20:16.556 22:19:10 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:16.556 Running I/O for 10 seconds... 00:20:26.528 00:20:26.528 Latency(us) 00:20:26.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.528 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:26.529 Verification LBA range: start 0x0 length 0x2000 00:20:26.529 TLSTESTn1 : 10.05 1033.08 4.04 0.00 0.00 123681.24 6154.69 145888.83 00:20:26.529 =================================================================================================================== 00:20:26.529 Total : 1033.08 4.04 0.00 0.00 123681.24 6154.69 145888.83 00:20:26.529 0 00:20:26.529 22:19:20 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:26.529 22:19:20 -- target/tls.sh@45 -- # killprocess 3591215 00:20:26.529 22:19:20 -- common/autotest_common.sh@926 -- # '[' -z 3591215 ']' 00:20:26.529 22:19:20 -- common/autotest_common.sh@930 -- # kill -0 3591215 00:20:26.529 22:19:20 -- common/autotest_common.sh@931 -- # uname 00:20:26.529 22:19:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:26.529 22:19:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3591215 00:20:26.529 22:19:20 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:26.529 22:19:20 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:26.529 22:19:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3591215' 00:20:26.529 killing process with pid 3591215 00:20:26.529 22:19:20 -- common/autotest_common.sh@945 -- # kill 3591215 00:20:26.529 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.529 00:20:26.529 Latency(us) 00:20:26.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.529 =================================================================================================================== 00:20:26.529 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.529 22:19:20 -- common/autotest_common.sh@950 -- # wait 3591215 00:20:26.529 22:19:20 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:26.529 22:19:20 -- common/autotest_common.sh@640 -- # local es=0 00:20:26.529 22:19:20 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:26.529 22:19:20 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:26.529 22:19:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:26.529 22:19:20 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:26.529 22:19:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:26.529 22:19:20 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:26.529 22:19:20 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:26.529 22:19:20 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:26.529 22:19:20 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:26.529 22:19:20 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:26.529 22:19:20 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.529 22:19:20 -- target/tls.sh@28 -- # bdevperf_pid=3593141 00:20:26.529 22:19:20 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.529 22:19:20 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:26.529 22:19:20 -- target/tls.sh@31 -- # waitforlisten 3593141 /var/tmp/bdevperf.sock 00:20:26.529 22:19:20 -- common/autotest_common.sh@819 -- # '[' -z 3593141 ']' 00:20:26.529 22:19:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.529 22:19:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:26.529 22:19:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.529 22:19:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:26.529 22:19:20 -- common/autotest_common.sh@10 -- # set +x 00:20:26.529 [2024-07-24 22:19:21.003735] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:20:26.529 [2024-07-24 22:19:21.003784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3593141 ] 00:20:26.529 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.529 [2024-07-24 22:19:21.057680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.529 [2024-07-24 22:19:21.095103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.787 22:19:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:26.787 22:19:21 -- common/autotest_common.sh@852 -- # return 0 00:20:26.787 22:19:21 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:27.045 [2024-07-24 22:19:21.939779] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.045 [2024-07-24 22:19:21.944753] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:27.045 [2024-07-24 22:19:21.945362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf20b10 (107): Transport endpoint is not connected 00:20:27.045 [2024-07-24 22:19:21.946355] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf20b10 (9): Bad file descriptor 00:20:27.045 [2024-07-24 22:19:21.947356] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:27.046 [2024-07-24 22:19:21.947365] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:27.046 [2024-07-24 22:19:21.947373] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:27.046 request: 00:20:27.046 { 00:20:27.046 "name": "TLSTEST", 00:20:27.046 "trtype": "tcp", 00:20:27.046 "traddr": "10.0.0.2", 00:20:27.046 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.046 "adrfam": "ipv4", 00:20:27.046 "trsvcid": "4420", 00:20:27.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.046 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:27.046 "method": "bdev_nvme_attach_controller", 00:20:27.046 "req_id": 1 00:20:27.046 } 00:20:27.046 Got JSON-RPC error response 00:20:27.046 response: 00:20:27.046 { 00:20:27.046 "code": -32602, 00:20:27.046 "message": "Invalid parameters" 00:20:27.046 } 00:20:27.046 22:19:21 -- target/tls.sh@36 -- # killprocess 3593141 00:20:27.046 22:19:21 -- common/autotest_common.sh@926 -- # '[' -z 3593141 ']' 00:20:27.046 22:19:21 -- common/autotest_common.sh@930 -- # kill -0 3593141 00:20:27.046 22:19:21 -- common/autotest_common.sh@931 -- # uname 00:20:27.046 22:19:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:27.046 22:19:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3593141 00:20:27.046 22:19:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:27.046 22:19:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:27.046 22:19:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3593141' 00:20:27.046 killing process with pid 3593141 00:20:27.046 22:19:22 -- common/autotest_common.sh@945 -- # kill 3593141 00:20:27.046 Received shutdown signal, test time was about 10.000000 seconds 00:20:27.046 00:20:27.046 Latency(us) 00:20:27.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:27.046 =================================================================================================================== 00:20:27.046 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:27.046 22:19:22 -- common/autotest_common.sh@950 -- # wait 3593141 00:20:27.046 22:19:22 -- target/tls.sh@37 -- # return 1 00:20:27.046 22:19:22 -- common/autotest_common.sh@643 -- # es=1 00:20:27.046 22:19:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:27.046 22:19:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:27.046 22:19:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:27.046 22:19:22 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:27.046 22:19:22 -- common/autotest_common.sh@640 -- # local es=0 00:20:27.046 22:19:22 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:27.046 22:19:22 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:27.046 22:19:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:27.046 22:19:22 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:27.046 22:19:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:27.046 22:19:22 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:27.046 22:19:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:27.046 22:19:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:27.046 22:19:22 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:20:27.046 22:19:22 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:27.046 22:19:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:27.046 22:19:22 -- target/tls.sh@28 -- # bdevperf_pid=3593318 00:20:27.046 22:19:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:27.046 22:19:22 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:27.046 22:19:22 -- target/tls.sh@31 -- # waitforlisten 3593318 /var/tmp/bdevperf.sock 00:20:27.046 22:19:22 -- common/autotest_common.sh@819 -- # '[' -z 3593318 ']' 00:20:27.046 22:19:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:27.046 22:19:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:27.046 22:19:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:27.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:27.046 22:19:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:27.046 22:19:22 -- common/autotest_common.sh@10 -- # set +x 00:20:27.304 [2024-07-24 22:19:22.220761] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:27.304 [2024-07-24 22:19:22.220813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3593318 ] 00:20:27.304 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.304 [2024-07-24 22:19:22.273083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.304 [2024-07-24 22:19:22.308064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.239 22:19:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:28.239 22:19:23 -- common/autotest_common.sh@852 -- # return 0 00:20:28.239 22:19:23 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:28.239 [2024-07-24 22:19:23.165008] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:28.239 [2024-07-24 22:19:23.169902] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:28.239 [2024-07-24 22:19:23.169926] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:28.239 [2024-07-24 22:19:23.169956] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:28.239 [2024-07-24 22:19:23.170584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ffb10 (107): Transport endpoint is not connected 00:20:28.239 [2024-07-24 22:19:23.171575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x23ffb10 (9): Bad file descriptor 00:20:28.239 [2024-07-24 22:19:23.172576] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:28.239 [2024-07-24 22:19:23.172587] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:28.239 [2024-07-24 22:19:23.172593] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:28.239 request: 00:20:28.239 { 00:20:28.239 "name": "TLSTEST", 00:20:28.239 "trtype": "tcp", 00:20:28.239 "traddr": "10.0.0.2", 00:20:28.239 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:28.239 "adrfam": "ipv4", 00:20:28.239 "trsvcid": "4420", 00:20:28.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.239 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:28.239 "method": "bdev_nvme_attach_controller", 00:20:28.239 "req_id": 1 00:20:28.239 } 00:20:28.239 Got JSON-RPC error response 00:20:28.239 response: 00:20:28.239 { 00:20:28.239 "code": -32602, 00:20:28.239 "message": "Invalid parameters" 00:20:28.239 } 00:20:28.239 22:19:23 -- target/tls.sh@36 -- # killprocess 3593318 00:20:28.239 22:19:23 -- common/autotest_common.sh@926 -- # '[' -z 3593318 ']' 00:20:28.239 22:19:23 -- common/autotest_common.sh@930 -- # kill -0 3593318 00:20:28.239 22:19:23 -- common/autotest_common.sh@931 -- # uname 00:20:28.239 22:19:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:28.239 22:19:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3593318 00:20:28.239 22:19:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:28.239 22:19:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:28.239 22:19:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3593318' 00:20:28.239 killing process with pid 3593318 00:20:28.239 22:19:23 -- common/autotest_common.sh@945 -- # kill 3593318 00:20:28.239 Received shutdown signal, test time was about 10.000000 seconds 00:20:28.239 00:20:28.239 Latency(us) 00:20:28.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.239 =================================================================================================================== 00:20:28.239 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:28.239 22:19:23 -- common/autotest_common.sh@950 -- # wait 3593318 00:20:28.496 22:19:23 -- target/tls.sh@37 -- # return 1 00:20:28.496 22:19:23 -- common/autotest_common.sh@643 -- # es=1 00:20:28.496 22:19:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:28.496 22:19:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:28.496 22:19:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:28.496 22:19:23 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:28.496 22:19:23 -- common/autotest_common.sh@640 -- # local es=0 00:20:28.496 22:19:23 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:28.496 22:19:23 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:28.496 22:19:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:28.496 22:19:23 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:28.496 22:19:23 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:28.496 22:19:23 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:28.496 22:19:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:28.496 22:19:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:28.496 22:19:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:28.496 22:19:23 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:28.496 22:19:23 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:28.496 22:19:23 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:28.496 22:19:23 -- target/tls.sh@28 -- # bdevperf_pid=3593559 00:20:28.496 22:19:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:28.496 22:19:23 -- target/tls.sh@31 -- # waitforlisten 3593559 /var/tmp/bdevperf.sock 00:20:28.496 22:19:23 -- common/autotest_common.sh@819 -- # '[' -z 3593559 ']' 00:20:28.496 22:19:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:28.496 22:19:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:28.496 22:19:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:28.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:28.496 22:19:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:28.496 22:19:23 -- common/autotest_common.sh@10 -- # set +x 00:20:28.496 [2024-07-24 22:19:23.422015] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:20:28.496 [2024-07-24 22:19:23.422076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3593559 ] 00:20:28.496 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.496 [2024-07-24 22:19:23.473660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.496 [2024-07-24 22:19:23.512378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.430 22:19:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:29.430 22:19:24 -- common/autotest_common.sh@852 -- # return 0 00:20:29.430 22:19:24 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:29.430 [2024-07-24 22:19:24.385419] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:29.430 [2024-07-24 22:19:24.390869] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:29.430 [2024-07-24 22:19:24.390892] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:29.430 [2024-07-24 22:19:24.390915] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:29.430 [2024-07-24 22:19:24.392035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ddb10 (107): Transport endpoint is not connected 00:20:29.430 [2024-07-24 22:19:24.393027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ddb10 (9): Bad file descriptor 00:20:29.430 [2024-07-24 22:19:24.394029] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:29.430 [2024-07-24 22:19:24.394039] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:29.430 [2024-07-24 22:19:24.394049] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:29.430 request: 00:20:29.430 { 00:20:29.430 "name": "TLSTEST", 00:20:29.430 "trtype": "tcp", 00:20:29.430 "traddr": "10.0.0.2", 00:20:29.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.430 "adrfam": "ipv4", 00:20:29.430 "trsvcid": "4420", 00:20:29.430 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:29.430 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:29.430 "method": "bdev_nvme_attach_controller", 00:20:29.430 "req_id": 1 00:20:29.430 } 00:20:29.430 Got JSON-RPC error response 00:20:29.430 response: 00:20:29.430 { 00:20:29.430 "code": -32602, 00:20:29.430 "message": "Invalid parameters" 00:20:29.430 } 00:20:29.430 22:19:24 -- target/tls.sh@36 -- # killprocess 3593559 00:20:29.430 22:19:24 -- common/autotest_common.sh@926 -- # '[' -z 3593559 ']' 00:20:29.430 22:19:24 -- common/autotest_common.sh@930 -- # kill -0 3593559 00:20:29.430 22:19:24 -- common/autotest_common.sh@931 -- # uname 00:20:29.430 22:19:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:29.430 22:19:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3593559 00:20:29.431 22:19:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:29.431 22:19:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:29.431 22:19:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3593559' 00:20:29.431 killing process with pid 3593559 00:20:29.431 22:19:24 -- common/autotest_common.sh@945 -- # kill 3593559 00:20:29.431 Received shutdown signal, test time was about 10.000000 seconds 00:20:29.431 00:20:29.431 Latency(us) 00:20:29.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.431 =================================================================================================================== 00:20:29.431 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:29.431 22:19:24 -- common/autotest_common.sh@950 -- # wait 3593559 00:20:29.689 22:19:24 -- target/tls.sh@37 -- # return 1 00:20:29.689 22:19:24 -- common/autotest_common.sh@643 -- # es=1 00:20:29.689 22:19:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:29.689 22:19:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:29.689 22:19:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:29.689 22:19:24 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:29.689 22:19:24 -- common/autotest_common.sh@640 -- # local es=0 00:20:29.689 22:19:24 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:29.689 22:19:24 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:29.689 22:19:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:29.689 22:19:24 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:29.689 22:19:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:29.689 22:19:24 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:29.689 22:19:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:29.689 22:19:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:29.689 22:19:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:29.689 22:19:24 -- target/tls.sh@23 -- # psk= 00:20:29.689 22:19:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:29.689 22:19:24 -- target/tls.sh@27 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:29.689 22:19:24 -- target/tls.sh@28 -- # bdevperf_pid=3593798 00:20:29.689 22:19:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:29.689 22:19:24 -- target/tls.sh@31 -- # waitforlisten 3593798 /var/tmp/bdevperf.sock 00:20:29.689 22:19:24 -- common/autotest_common.sh@819 -- # '[' -z 3593798 ']' 00:20:29.689 22:19:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.689 22:19:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:29.689 22:19:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:29.689 22:19:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:29.689 22:19:24 -- common/autotest_common.sh@10 -- # set +x 00:20:29.689 [2024-07-24 22:19:24.654365] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:29.690 [2024-07-24 22:19:24.654414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3593798 ] 00:20:29.690 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.690 [2024-07-24 22:19:24.705100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.690 [2024-07-24 22:19:24.738912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.627 22:19:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:30.627 22:19:25 -- common/autotest_common.sh@852 -- # return 0 00:20:30.627 22:19:25 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:30.627 [2024-07-24 22:19:25.624054] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:30.627 [2024-07-24 22:19:25.625976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x122a110 (9): Bad file descriptor 00:20:30.627 [2024-07-24 22:19:25.626974] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:30.627 [2024-07-24 22:19:25.626986] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:30.627 [2024-07-24 22:19:25.626993] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:30.627 request: 00:20:30.627 { 00:20:30.627 "name": "TLSTEST", 00:20:30.627 "trtype": "tcp", 00:20:30.627 "traddr": "10.0.0.2", 00:20:30.627 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.627 "adrfam": "ipv4", 00:20:30.627 "trsvcid": "4420", 00:20:30.627 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.627 "method": "bdev_nvme_attach_controller", 00:20:30.627 "req_id": 1 00:20:30.627 } 00:20:30.627 Got JSON-RPC error response 00:20:30.627 response: 00:20:30.627 { 00:20:30.627 "code": -32602, 00:20:30.627 "message": "Invalid parameters" 00:20:30.627 } 00:20:30.627 22:19:25 -- target/tls.sh@36 -- # killprocess 3593798 00:20:30.627 22:19:25 -- common/autotest_common.sh@926 -- # '[' -z 3593798 ']' 00:20:30.627 22:19:25 -- common/autotest_common.sh@930 -- # kill -0 3593798 00:20:30.627 22:19:25 -- common/autotest_common.sh@931 -- # uname 00:20:30.627 22:19:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:30.627 22:19:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3593798 00:20:30.627 22:19:25 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:30.627 22:19:25 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:30.627 22:19:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3593798' 00:20:30.627 killing process with pid 3593798 00:20:30.627 22:19:25 -- common/autotest_common.sh@945 -- # kill 3593798 00:20:30.627 Received shutdown signal, test time was about 10.000000 seconds 00:20:30.627 00:20:30.627 Latency(us) 00:20:30.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:30.627 =================================================================================================================== 00:20:30.627 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:30.627 22:19:25 -- common/autotest_common.sh@950 -- # wait 3593798 00:20:30.886 22:19:25 -- target/tls.sh@37 -- # return 1 00:20:30.886 22:19:25 -- common/autotest_common.sh@643 -- # es=1 00:20:30.886 22:19:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:30.886 22:19:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:30.886 22:19:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:30.886 22:19:25 -- target/tls.sh@167 -- # killprocess 3589038 00:20:30.886 22:19:25 -- common/autotest_common.sh@926 -- # '[' -z 3589038 ']' 00:20:30.886 22:19:25 -- common/autotest_common.sh@930 -- # kill -0 3589038 00:20:30.886 22:19:25 -- common/autotest_common.sh@931 -- # uname 00:20:30.886 22:19:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:30.886 22:19:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3589038 00:20:30.886 22:19:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:30.886 22:19:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:30.886 22:19:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3589038' 00:20:30.886 killing process with pid 3589038 00:20:30.886 22:19:25 -- common/autotest_common.sh@945 -- # kill 3589038 00:20:30.886 22:19:25 -- common/autotest_common.sh@950 -- # wait 3589038 00:20:31.146 22:19:26 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:31.146 22:19:26 -- target/tls.sh@49 -- # local key hash crc 00:20:31.146 22:19:26 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:31.146 22:19:26 -- target/tls.sh@51 -- # hash=02 00:20:31.146 22:19:26 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:20:31.146 22:19:26 -- target/tls.sh@52 -- # gzip -1 -c 00:20:31.146 22:19:26 -- target/tls.sh@52 -- # tail -c8 00:20:31.146 22:19:26 -- target/tls.sh@52 -- # head -c 4 00:20:31.146 22:19:26 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:31.146 22:19:26 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:31.146 22:19:26 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:31.146 22:19:26 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:31.146 22:19:26 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:31.146 22:19:26 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:31.146 22:19:26 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:31.146 22:19:26 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:31.146 22:19:26 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:31.146 22:19:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:31.146 22:19:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:31.146 22:19:26 -- common/autotest_common.sh@10 -- # set +x 00:20:31.146 22:19:26 -- nvmf/common.sh@469 -- # nvmfpid=3594058 00:20:31.146 22:19:26 -- nvmf/common.sh@470 -- # waitforlisten 3594058 00:20:31.146 22:19:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:31.146 22:19:26 -- common/autotest_common.sh@819 -- # '[' -z 3594058 ']' 00:20:31.146 22:19:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.146 22:19:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:31.146 22:19:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.146 22:19:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:31.146 22:19:26 -- common/autotest_common.sh@10 -- # set +x 00:20:31.146 [2024-07-24 22:19:26.147902] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:31.146 [2024-07-24 22:19:26.147948] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.146 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.146 [2024-07-24 22:19:26.205565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.147 [2024-07-24 22:19:26.239753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:31.147 [2024-07-24 22:19:26.239864] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.147 [2024-07-24 22:19:26.239872] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.147 [2024-07-24 22:19:26.239878] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
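Sketch, not part of the captured run: the format_interchange_psk steps above derive the CRC32 of the configured key from the gzip -1 trailer, append those 4 bytes to the key, and base64-encode the result to produce the interchange string written to key_long.txt. A condensed bash equivalent using the same key would be:
key=00112233445566778899aabbccddeeff0011223344556677
# gzip stores the CRC32 of its input as the first 4 (little-endian) bytes of its
# 8-byte trailer, so tail -c8 | head -c4 extracts exactly those CRC bytes.
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
# Command substitution is safe here because this key's CRC bytes contain no NUL or newline.
# Append the raw CRC to the key and base64-encode; "02" is the hash identifier
# the test pairs with this 48-byte key.
echo "NVMeTLSkey-1:02:$(echo -n "${key}${crc}" | base64):"
# Expected to print the key_long value captured above:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: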
00:20:31.147 [2024-07-24 22:19:26.239901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.082 22:19:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:32.082 22:19:26 -- common/autotest_common.sh@852 -- # return 0 00:20:32.082 22:19:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:32.082 22:19:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:32.082 22:19:26 -- common/autotest_common.sh@10 -- # set +x 00:20:32.082 22:19:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:32.082 22:19:26 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:32.082 22:19:26 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:32.082 22:19:26 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:32.082 [2024-07-24 22:19:27.122383] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:32.082 22:19:27 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:32.341 22:19:27 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:32.341 [2024-07-24 22:19:27.443205] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:32.341 [2024-07-24 22:19:27.443374] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:32.341 22:19:27 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:32.599 malloc0 00:20:32.600 22:19:27 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:32.858 22:19:27 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:32.858 22:19:27 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:32.858 22:19:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:32.858 22:19:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:32.858 22:19:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:32.859 22:19:27 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:32.859 22:19:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:32.859 22:19:27 -- target/tls.sh@28 -- # bdevperf_pid=3594366 00:20:32.859 22:19:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:32.859 22:19:27 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:32.859 22:19:27 -- target/tls.sh@31 -- # waitforlisten 3594366 /var/tmp/bdevperf.sock 00:20:32.859 22:19:27 -- common/autotest_common.sh@819 -- # '[' -z 3594366 
']' 00:20:32.859 22:19:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:32.859 22:19:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:32.859 22:19:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:32.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:32.859 22:19:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:32.859 22:19:27 -- common/autotest_common.sh@10 -- # set +x 00:20:32.859 [2024-07-24 22:19:27.989170] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:32.859 [2024-07-24 22:19:27.989219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3594366 ] 00:20:33.118 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.118 [2024-07-24 22:19:28.040945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.118 [2024-07-24 22:19:28.078407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.685 22:19:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:33.685 22:19:28 -- common/autotest_common.sh@852 -- # return 0 00:20:33.685 22:19:28 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:33.944 [2024-07-24 22:19:28.934924] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:33.944 TLSTESTn1 00:20:33.944 22:19:29 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:34.202 Running I/O for 10 seconds... 
00:20:44.179 00:20:44.179 Latency(us) 00:20:44.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.179 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:44.179 Verification LBA range: start 0x0 length 0x2000 00:20:44.179 TLSTESTn1 : 10.04 1349.52 5.27 0.00 0.00 94707.97 11568.53 121270.09 00:20:44.179 =================================================================================================================== 00:20:44.179 Total : 1349.52 5.27 0.00 0.00 94707.97 11568.53 121270.09 00:20:44.179 0 00:20:44.179 22:19:39 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:44.179 22:19:39 -- target/tls.sh@45 -- # killprocess 3594366 00:20:44.179 22:19:39 -- common/autotest_common.sh@926 -- # '[' -z 3594366 ']' 00:20:44.179 22:19:39 -- common/autotest_common.sh@930 -- # kill -0 3594366 00:20:44.179 22:19:39 -- common/autotest_common.sh@931 -- # uname 00:20:44.179 22:19:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:44.179 22:19:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3594366 00:20:44.179 22:19:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:44.179 22:19:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:44.179 22:19:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3594366' 00:20:44.179 killing process with pid 3594366 00:20:44.179 22:19:39 -- common/autotest_common.sh@945 -- # kill 3594366 00:20:44.179 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.179 00:20:44.179 Latency(us) 00:20:44.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.179 =================================================================================================================== 00:20:44.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:44.179 22:19:39 -- common/autotest_common.sh@950 -- # wait 3594366 00:20:44.439 22:19:39 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:44.439 22:19:39 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:44.439 22:19:39 -- common/autotest_common.sh@640 -- # local es=0 00:20:44.439 22:19:39 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:44.439 22:19:39 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:44.439 22:19:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:44.439 22:19:39 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:44.439 22:19:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:44.439 22:19:39 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:44.439 22:19:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:44.439 22:19:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:44.439 22:19:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:44.439 22:19:39 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:44.439 22:19:39 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:44.439 22:19:39 -- target/tls.sh@28 -- # bdevperf_pid=3596356 00:20:44.439 22:19:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.439 22:19:39 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.439 22:19:39 -- target/tls.sh@31 -- # waitforlisten 3596356 /var/tmp/bdevperf.sock 00:20:44.439 22:19:39 -- common/autotest_common.sh@819 -- # '[' -z 3596356 ']' 00:20:44.439 22:19:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.439 22:19:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:44.439 22:19:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.439 22:19:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:44.439 22:19:39 -- common/autotest_common.sh@10 -- # set +x 00:20:44.439 [2024-07-24 22:19:39.471472] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:44.439 [2024-07-24 22:19:39.471521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596356 ] 00:20:44.439 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.439 [2024-07-24 22:19:39.523005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.439 [2024-07-24 22:19:39.556413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.375 22:19:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:45.375 22:19:40 -- common/autotest_common.sh@852 -- # return 0 00:20:45.375 22:19:40 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:45.375 [2024-07-24 22:19:40.409206] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:45.375 [2024-07-24 22:19:40.409247] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:45.375 request: 00:20:45.375 { 00:20:45.375 "name": "TLSTEST", 00:20:45.375 "trtype": "tcp", 00:20:45.375 "traddr": "10.0.0.2", 00:20:45.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.375 "adrfam": "ipv4", 00:20:45.375 "trsvcid": "4420", 00:20:45.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.375 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:45.375 "method": "bdev_nvme_attach_controller", 00:20:45.375 "req_id": 1 00:20:45.375 } 00:20:45.375 Got JSON-RPC error response 00:20:45.375 response: 00:20:45.375 { 00:20:45.375 "code": -22, 00:20:45.375 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:45.375 } 00:20:45.375 22:19:40 -- target/tls.sh@36 -- # killprocess 3596356 00:20:45.375 22:19:40 -- common/autotest_common.sh@926 -- # '[' -z 3596356 ']' 00:20:45.375 22:19:40 -- 
common/autotest_common.sh@930 -- # kill -0 3596356 00:20:45.375 22:19:40 -- common/autotest_common.sh@931 -- # uname 00:20:45.375 22:19:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:45.375 22:19:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3596356 00:20:45.375 22:19:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:45.375 22:19:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:45.375 22:19:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3596356' 00:20:45.375 killing process with pid 3596356 00:20:45.375 22:19:40 -- common/autotest_common.sh@945 -- # kill 3596356 00:20:45.375 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.375 00:20:45.375 Latency(us) 00:20:45.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.375 =================================================================================================================== 00:20:45.375 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:45.375 22:19:40 -- common/autotest_common.sh@950 -- # wait 3596356 00:20:45.674 22:19:40 -- target/tls.sh@37 -- # return 1 00:20:45.674 22:19:40 -- common/autotest_common.sh@643 -- # es=1 00:20:45.674 22:19:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:45.674 22:19:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:45.674 22:19:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:45.674 22:19:40 -- target/tls.sh@183 -- # killprocess 3594058 00:20:45.674 22:19:40 -- common/autotest_common.sh@926 -- # '[' -z 3594058 ']' 00:20:45.674 22:19:40 -- common/autotest_common.sh@930 -- # kill -0 3594058 00:20:45.674 22:19:40 -- common/autotest_common.sh@931 -- # uname 00:20:45.674 22:19:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:45.674 22:19:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3594058 00:20:45.674 22:19:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:45.674 22:19:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:45.674 22:19:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3594058' 00:20:45.674 killing process with pid 3594058 00:20:45.674 22:19:40 -- common/autotest_common.sh@945 -- # kill 3594058 00:20:45.674 22:19:40 -- common/autotest_common.sh@950 -- # wait 3594058 00:20:45.954 22:19:40 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:45.954 22:19:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:45.954 22:19:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:45.954 22:19:40 -- common/autotest_common.sh@10 -- # set +x 00:20:45.954 22:19:40 -- nvmf/common.sh@469 -- # nvmfpid=3596626 00:20:45.954 22:19:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:45.954 22:19:40 -- nvmf/common.sh@470 -- # waitforlisten 3596626 00:20:45.954 22:19:40 -- common/autotest_common.sh@819 -- # '[' -z 3596626 ']' 00:20:45.954 22:19:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.954 22:19:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:45.954 22:19:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
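Sketch, not part of the captured run: the attach rejected above fails because key_long.txt had just been made world-readable (chmod 0666) and tcp_load_psk reports "Incorrect permissions for PSK file"; the passing runs keep the key at mode 0600. The client-side pattern these tests drive, with repository paths abbreviated, is roughly:
# Start bdevperf with its own RPC socket (the harness waits for the socket to appear).
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# Attach to the TLS listener, pointing --psk at the 0600-mode key file written earlier.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk test/nvmf/target/key_long.txt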
00:20:45.954 22:19:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:45.954 22:19:40 -- common/autotest_common.sh@10 -- # set +x 00:20:45.954 [2024-07-24 22:19:40.905704] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:45.954 [2024-07-24 22:19:40.905749] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.954 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.954 [2024-07-24 22:19:40.960871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.954 [2024-07-24 22:19:40.997218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:45.954 [2024-07-24 22:19:40.997329] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.954 [2024-07-24 22:19:40.997336] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.954 [2024-07-24 22:19:40.997343] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.954 [2024-07-24 22:19:40.997360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.891 22:19:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:46.891 22:19:41 -- common/autotest_common.sh@852 -- # return 0 00:20:46.891 22:19:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:46.891 22:19:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:46.891 22:19:41 -- common/autotest_common.sh@10 -- # set +x 00:20:46.891 22:19:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.891 22:19:41 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:46.891 22:19:41 -- common/autotest_common.sh@640 -- # local es=0 00:20:46.891 22:19:41 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:46.891 22:19:41 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:46.891 22:19:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:46.891 22:19:41 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:46.891 22:19:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:46.891 22:19:41 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:46.891 22:19:41 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:46.891 22:19:41 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:46.891 [2024-07-24 22:19:41.894369] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.891 22:19:41 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:47.149 22:19:42 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:47.149 [2024-07-24 22:19:42.223244] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:47.149 [2024-07-24 22:19:42.223421] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.149 22:19:42 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:47.407 malloc0 00:20:47.407 22:19:42 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:47.666 22:19:42 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:47.666 [2024-07-24 22:19:42.704433] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:47.666 [2024-07-24 22:19:42.704459] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:47.666 [2024-07-24 22:19:42.704473] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:47.666 request: 00:20:47.666 { 00:20:47.666 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.666 "host": "nqn.2016-06.io.spdk:host1", 00:20:47.666 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:47.666 "method": "nvmf_subsystem_add_host", 00:20:47.666 "req_id": 1 00:20:47.666 } 00:20:47.666 Got JSON-RPC error response 00:20:47.666 response: 00:20:47.666 { 00:20:47.666 "code": -32603, 00:20:47.666 "message": "Internal error" 00:20:47.666 } 00:20:47.666 22:19:42 -- common/autotest_common.sh@643 -- # es=1 00:20:47.666 22:19:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:47.666 22:19:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:47.666 22:19:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:47.666 22:19:42 -- target/tls.sh@189 -- # killprocess 3596626 00:20:47.666 22:19:42 -- common/autotest_common.sh@926 -- # '[' -z 3596626 ']' 00:20:47.666 22:19:42 -- common/autotest_common.sh@930 -- # kill -0 3596626 00:20:47.666 22:19:42 -- common/autotest_common.sh@931 -- # uname 00:20:47.666 22:19:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:47.666 22:19:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3596626 00:20:47.666 22:19:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:47.666 22:19:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:47.666 22:19:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3596626' 00:20:47.666 killing process with pid 3596626 00:20:47.666 22:19:42 -- common/autotest_common.sh@945 -- # kill 3596626 00:20:47.666 22:19:42 -- common/autotest_common.sh@950 -- # wait 3596626 00:20:47.925 22:19:42 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:47.925 22:19:42 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:47.925 22:19:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:47.925 22:19:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:47.925 22:19:42 -- common/autotest_common.sh@10 -- # set +x 00:20:47.925 22:19:42 -- nvmf/common.sh@469 -- # nvmfpid=3596934 00:20:47.925 22:19:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:20:47.925 22:19:42 -- nvmf/common.sh@470 -- # waitforlisten 3596934 00:20:47.925 22:19:42 -- common/autotest_common.sh@819 -- # '[' -z 3596934 ']' 00:20:47.925 22:19:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.925 22:19:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:47.925 22:19:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.926 22:19:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:47.926 22:19:42 -- common/autotest_common.sh@10 -- # set +x 00:20:47.926 [2024-07-24 22:19:43.005193] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:47.926 [2024-07-24 22:19:43.005239] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.926 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.184 [2024-07-24 22:19:43.063766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.184 [2024-07-24 22:19:43.099218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:48.184 [2024-07-24 22:19:43.099327] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.184 [2024-07-24 22:19:43.099335] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.184 [2024-07-24 22:19:43.099341] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:48.184 [2024-07-24 22:19:43.099363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.751 22:19:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:48.751 22:19:43 -- common/autotest_common.sh@852 -- # return 0 00:20:48.751 22:19:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:48.751 22:19:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:48.751 22:19:43 -- common/autotest_common.sh@10 -- # set +x 00:20:48.751 22:19:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.751 22:19:43 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:48.751 22:19:43 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:48.751 22:19:43 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:49.009 [2024-07-24 22:19:43.968691] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.009 22:19:43 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:49.267 22:19:44 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:49.267 [2024-07-24 22:19:44.293540] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:49.267 [2024-07-24 22:19:44.293714] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.267 22:19:44 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:49.525 malloc0 00:20:49.525 22:19:44 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:49.525 22:19:44 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:49.784 22:19:44 -- target/tls.sh@197 -- # bdevperf_pid=3597249 00:20:49.784 22:19:44 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:49.784 22:19:44 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:49.784 22:19:44 -- target/tls.sh@200 -- # waitforlisten 3597249 /var/tmp/bdevperf.sock 00:20:49.784 22:19:44 -- common/autotest_common.sh@819 -- # '[' -z 3597249 ']' 00:20:49.784 22:19:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.784 22:19:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:49.784 22:19:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
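Sketch, not part of the captured run: the setup_nvmf_tgt call that just completed stands up the TLS-protected target side. Reduced to the bare RPCs, with the rpc.py and key paths abbreviated, the sequence is:
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k makes this a secure listener; it appears as "secure_channel": true in save_config.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# The PSK file must be mode 0600; otherwise nvmf_subsystem_add_host fails with
# "Could not retrieve PSK from file", as seen earlier in this run.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key_long.txt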
00:20:49.784 22:19:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:49.784 22:19:44 -- common/autotest_common.sh@10 -- # set +x 00:20:49.784 [2024-07-24 22:19:44.846285] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:49.784 [2024-07-24 22:19:44.846335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597249 ] 00:20:49.784 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.784 [2024-07-24 22:19:44.897891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.043 [2024-07-24 22:19:44.935126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.610 22:19:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:50.610 22:19:45 -- common/autotest_common.sh@852 -- # return 0 00:20:50.610 22:19:45 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:50.868 [2024-07-24 22:19:45.779594] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:50.868 TLSTESTn1 00:20:50.868 22:19:45 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:51.128 22:19:46 -- target/tls.sh@205 -- # tgtconf='{ 00:20:51.128 "subsystems": [ 00:20:51.128 { 00:20:51.128 "subsystem": "iobuf", 00:20:51.128 "config": [ 00:20:51.128 { 00:20:51.128 "method": "iobuf_set_options", 00:20:51.128 "params": { 00:20:51.128 "small_pool_count": 8192, 00:20:51.128 "large_pool_count": 1024, 00:20:51.128 "small_bufsize": 8192, 00:20:51.128 "large_bufsize": 135168 00:20:51.128 } 00:20:51.128 } 00:20:51.128 ] 00:20:51.128 }, 00:20:51.128 { 00:20:51.128 "subsystem": "sock", 00:20:51.128 "config": [ 00:20:51.128 { 00:20:51.128 "method": "sock_impl_set_options", 00:20:51.128 "params": { 00:20:51.128 "impl_name": "posix", 00:20:51.128 "recv_buf_size": 2097152, 00:20:51.128 "send_buf_size": 2097152, 00:20:51.128 "enable_recv_pipe": true, 00:20:51.128 "enable_quickack": false, 00:20:51.128 "enable_placement_id": 0, 00:20:51.128 "enable_zerocopy_send_server": true, 00:20:51.128 "enable_zerocopy_send_client": false, 00:20:51.128 "zerocopy_threshold": 0, 00:20:51.128 "tls_version": 0, 00:20:51.128 "enable_ktls": false 00:20:51.128 } 00:20:51.128 }, 00:20:51.128 { 00:20:51.128 "method": "sock_impl_set_options", 00:20:51.128 "params": { 00:20:51.128 "impl_name": "ssl", 00:20:51.128 "recv_buf_size": 4096, 00:20:51.128 "send_buf_size": 4096, 00:20:51.128 "enable_recv_pipe": true, 00:20:51.128 "enable_quickack": false, 00:20:51.128 "enable_placement_id": 0, 00:20:51.128 "enable_zerocopy_send_server": true, 00:20:51.128 "enable_zerocopy_send_client": false, 00:20:51.128 "zerocopy_threshold": 0, 00:20:51.128 "tls_version": 0, 00:20:51.128 "enable_ktls": false 00:20:51.128 } 00:20:51.128 } 00:20:51.128 ] 00:20:51.128 }, 00:20:51.128 { 00:20:51.128 "subsystem": "vmd", 00:20:51.128 "config": [] 00:20:51.128 }, 00:20:51.128 { 00:20:51.128 "subsystem": "accel", 00:20:51.128 "config": [ 00:20:51.128 { 00:20:51.128 "method": "accel_set_options", 00:20:51.128 "params": { 00:20:51.128 "small_cache_size": 128, 
00:20:51.128 "large_cache_size": 16, 00:20:51.128 "task_count": 2048, 00:20:51.128 "sequence_count": 2048, 00:20:51.128 "buf_count": 2048 00:20:51.128 } 00:20:51.128 } 00:20:51.128 ] 00:20:51.128 }, 00:20:51.128 { 00:20:51.128 "subsystem": "bdev", 00:20:51.128 "config": [ 00:20:51.128 { 00:20:51.128 "method": "bdev_set_options", 00:20:51.128 "params": { 00:20:51.128 "bdev_io_pool_size": 65535, 00:20:51.128 "bdev_io_cache_size": 256, 00:20:51.128 "bdev_auto_examine": true, 00:20:51.128 "iobuf_small_cache_size": 128, 00:20:51.128 "iobuf_large_cache_size": 16 00:20:51.128 } 00:20:51.128 }, 00:20:51.128 { 00:20:51.128 "method": "bdev_raid_set_options", 00:20:51.128 "params": { 00:20:51.128 "process_window_size_kb": 1024 00:20:51.128 } 00:20:51.128 }, 00:20:51.128 { 00:20:51.128 "method": "bdev_iscsi_set_options", 00:20:51.128 "params": { 00:20:51.128 "timeout_sec": 30 00:20:51.128 } 00:20:51.128 }, 00:20:51.128 { 00:20:51.128 "method": "bdev_nvme_set_options", 00:20:51.128 "params": { 00:20:51.128 "action_on_timeout": "none", 00:20:51.128 "timeout_us": 0, 00:20:51.128 "timeout_admin_us": 0, 00:20:51.128 "keep_alive_timeout_ms": 10000, 00:20:51.128 "transport_retry_count": 4, 00:20:51.128 "arbitration_burst": 0, 00:20:51.128 "low_priority_weight": 0, 00:20:51.128 "medium_priority_weight": 0, 00:20:51.128 "high_priority_weight": 0, 00:20:51.128 "nvme_adminq_poll_period_us": 10000, 00:20:51.128 "nvme_ioq_poll_period_us": 0, 00:20:51.128 "io_queue_requests": 0, 00:20:51.128 "delay_cmd_submit": true, 00:20:51.128 "bdev_retry_count": 3, 00:20:51.128 "transport_ack_timeout": 0, 00:20:51.128 "ctrlr_loss_timeout_sec": 0, 00:20:51.128 "reconnect_delay_sec": 0, 00:20:51.128 "fast_io_fail_timeout_sec": 0, 00:20:51.128 "generate_uuids": false, 00:20:51.128 "transport_tos": 0, 00:20:51.128 "io_path_stat": false, 00:20:51.128 "allow_accel_sequence": false 00:20:51.128 } 00:20:51.128 }, 00:20:51.128 { 00:20:51.128 "method": "bdev_nvme_set_hotplug", 00:20:51.128 "params": { 00:20:51.128 "period_us": 100000, 00:20:51.128 "enable": false 00:20:51.128 } 00:20:51.128 }, 00:20:51.128 { 00:20:51.128 "method": "bdev_malloc_create", 00:20:51.128 "params": { 00:20:51.128 "name": "malloc0", 00:20:51.128 "num_blocks": 8192, 00:20:51.128 "block_size": 4096, 00:20:51.128 "physical_block_size": 4096, 00:20:51.128 "uuid": "985aecbe-3184-434d-802c-da58702929f7", 00:20:51.128 "optimal_io_boundary": 0 00:20:51.128 } 00:20:51.128 }, 00:20:51.128 { 00:20:51.128 "method": "bdev_wait_for_examine" 00:20:51.128 } 00:20:51.128 ] 00:20:51.128 }, 00:20:51.128 { 00:20:51.128 "subsystem": "nbd", 00:20:51.128 "config": [] 00:20:51.128 }, 00:20:51.128 { 00:20:51.129 "subsystem": "scheduler", 00:20:51.129 "config": [ 00:20:51.129 { 00:20:51.129 "method": "framework_set_scheduler", 00:20:51.129 "params": { 00:20:51.129 "name": "static" 00:20:51.129 } 00:20:51.129 } 00:20:51.129 ] 00:20:51.129 }, 00:20:51.129 { 00:20:51.129 "subsystem": "nvmf", 00:20:51.129 "config": [ 00:20:51.129 { 00:20:51.129 "method": "nvmf_set_config", 00:20:51.129 "params": { 00:20:51.129 "discovery_filter": "match_any", 00:20:51.129 "admin_cmd_passthru": { 00:20:51.129 "identify_ctrlr": false 00:20:51.129 } 00:20:51.129 } 00:20:51.129 }, 00:20:51.129 { 00:20:51.129 "method": "nvmf_set_max_subsystems", 00:20:51.129 "params": { 00:20:51.129 "max_subsystems": 1024 00:20:51.129 } 00:20:51.129 }, 00:20:51.129 { 00:20:51.129 "method": "nvmf_set_crdt", 00:20:51.129 "params": { 00:20:51.129 "crdt1": 0, 00:20:51.129 "crdt2": 0, 00:20:51.129 "crdt3": 0 00:20:51.129 } 
00:20:51.129 }, 00:20:51.129 { 00:20:51.129 "method": "nvmf_create_transport", 00:20:51.129 "params": { 00:20:51.129 "trtype": "TCP", 00:20:51.129 "max_queue_depth": 128, 00:20:51.129 "max_io_qpairs_per_ctrlr": 127, 00:20:51.129 "in_capsule_data_size": 4096, 00:20:51.129 "max_io_size": 131072, 00:20:51.129 "io_unit_size": 131072, 00:20:51.129 "max_aq_depth": 128, 00:20:51.129 "num_shared_buffers": 511, 00:20:51.129 "buf_cache_size": 4294967295, 00:20:51.129 "dif_insert_or_strip": false, 00:20:51.129 "zcopy": false, 00:20:51.129 "c2h_success": false, 00:20:51.129 "sock_priority": 0, 00:20:51.129 "abort_timeout_sec": 1 00:20:51.129 } 00:20:51.129 }, 00:20:51.129 { 00:20:51.129 "method": "nvmf_create_subsystem", 00:20:51.129 "params": { 00:20:51.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.129 "allow_any_host": false, 00:20:51.129 "serial_number": "SPDK00000000000001", 00:20:51.129 "model_number": "SPDK bdev Controller", 00:20:51.129 "max_namespaces": 10, 00:20:51.129 "min_cntlid": 1, 00:20:51.129 "max_cntlid": 65519, 00:20:51.129 "ana_reporting": false 00:20:51.129 } 00:20:51.129 }, 00:20:51.129 { 00:20:51.129 "method": "nvmf_subsystem_add_host", 00:20:51.129 "params": { 00:20:51.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.129 "host": "nqn.2016-06.io.spdk:host1", 00:20:51.129 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:51.129 } 00:20:51.129 }, 00:20:51.129 { 00:20:51.129 "method": "nvmf_subsystem_add_ns", 00:20:51.129 "params": { 00:20:51.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.129 "namespace": { 00:20:51.129 "nsid": 1, 00:20:51.129 "bdev_name": "malloc0", 00:20:51.129 "nguid": "985AECBE3184434D802CDA58702929F7", 00:20:51.129 "uuid": "985aecbe-3184-434d-802c-da58702929f7" 00:20:51.129 } 00:20:51.129 } 00:20:51.129 }, 00:20:51.129 { 00:20:51.129 "method": "nvmf_subsystem_add_listener", 00:20:51.129 "params": { 00:20:51.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.129 "listen_address": { 00:20:51.129 "trtype": "TCP", 00:20:51.129 "adrfam": "IPv4", 00:20:51.129 "traddr": "10.0.0.2", 00:20:51.129 "trsvcid": "4420" 00:20:51.129 }, 00:20:51.129 "secure_channel": true 00:20:51.129 } 00:20:51.129 } 00:20:51.129 ] 00:20:51.129 } 00:20:51.129 ] 00:20:51.129 }' 00:20:51.129 22:19:46 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:51.389 22:19:46 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:51.389 "subsystems": [ 00:20:51.389 { 00:20:51.389 "subsystem": "iobuf", 00:20:51.389 "config": [ 00:20:51.389 { 00:20:51.389 "method": "iobuf_set_options", 00:20:51.389 "params": { 00:20:51.389 "small_pool_count": 8192, 00:20:51.389 "large_pool_count": 1024, 00:20:51.389 "small_bufsize": 8192, 00:20:51.389 "large_bufsize": 135168 00:20:51.389 } 00:20:51.389 } 00:20:51.389 ] 00:20:51.389 }, 00:20:51.389 { 00:20:51.389 "subsystem": "sock", 00:20:51.389 "config": [ 00:20:51.389 { 00:20:51.389 "method": "sock_impl_set_options", 00:20:51.389 "params": { 00:20:51.389 "impl_name": "posix", 00:20:51.389 "recv_buf_size": 2097152, 00:20:51.389 "send_buf_size": 2097152, 00:20:51.389 "enable_recv_pipe": true, 00:20:51.389 "enable_quickack": false, 00:20:51.389 "enable_placement_id": 0, 00:20:51.389 "enable_zerocopy_send_server": true, 00:20:51.389 "enable_zerocopy_send_client": false, 00:20:51.389 "zerocopy_threshold": 0, 00:20:51.389 "tls_version": 0, 00:20:51.389 "enable_ktls": false 00:20:51.389 } 00:20:51.389 }, 00:20:51.389 { 00:20:51.389 "method": 
"sock_impl_set_options", 00:20:51.389 "params": { 00:20:51.389 "impl_name": "ssl", 00:20:51.389 "recv_buf_size": 4096, 00:20:51.389 "send_buf_size": 4096, 00:20:51.389 "enable_recv_pipe": true, 00:20:51.389 "enable_quickack": false, 00:20:51.389 "enable_placement_id": 0, 00:20:51.389 "enable_zerocopy_send_server": true, 00:20:51.389 "enable_zerocopy_send_client": false, 00:20:51.389 "zerocopy_threshold": 0, 00:20:51.389 "tls_version": 0, 00:20:51.389 "enable_ktls": false 00:20:51.389 } 00:20:51.389 } 00:20:51.389 ] 00:20:51.389 }, 00:20:51.389 { 00:20:51.389 "subsystem": "vmd", 00:20:51.389 "config": [] 00:20:51.389 }, 00:20:51.389 { 00:20:51.389 "subsystem": "accel", 00:20:51.389 "config": [ 00:20:51.389 { 00:20:51.389 "method": "accel_set_options", 00:20:51.389 "params": { 00:20:51.389 "small_cache_size": 128, 00:20:51.389 "large_cache_size": 16, 00:20:51.389 "task_count": 2048, 00:20:51.389 "sequence_count": 2048, 00:20:51.389 "buf_count": 2048 00:20:51.389 } 00:20:51.389 } 00:20:51.389 ] 00:20:51.389 }, 00:20:51.389 { 00:20:51.389 "subsystem": "bdev", 00:20:51.389 "config": [ 00:20:51.389 { 00:20:51.389 "method": "bdev_set_options", 00:20:51.389 "params": { 00:20:51.389 "bdev_io_pool_size": 65535, 00:20:51.389 "bdev_io_cache_size": 256, 00:20:51.389 "bdev_auto_examine": true, 00:20:51.389 "iobuf_small_cache_size": 128, 00:20:51.389 "iobuf_large_cache_size": 16 00:20:51.389 } 00:20:51.389 }, 00:20:51.389 { 00:20:51.389 "method": "bdev_raid_set_options", 00:20:51.389 "params": { 00:20:51.389 "process_window_size_kb": 1024 00:20:51.389 } 00:20:51.389 }, 00:20:51.389 { 00:20:51.389 "method": "bdev_iscsi_set_options", 00:20:51.389 "params": { 00:20:51.389 "timeout_sec": 30 00:20:51.389 } 00:20:51.389 }, 00:20:51.389 { 00:20:51.390 "method": "bdev_nvme_set_options", 00:20:51.390 "params": { 00:20:51.390 "action_on_timeout": "none", 00:20:51.390 "timeout_us": 0, 00:20:51.390 "timeout_admin_us": 0, 00:20:51.390 "keep_alive_timeout_ms": 10000, 00:20:51.390 "transport_retry_count": 4, 00:20:51.390 "arbitration_burst": 0, 00:20:51.390 "low_priority_weight": 0, 00:20:51.390 "medium_priority_weight": 0, 00:20:51.390 "high_priority_weight": 0, 00:20:51.390 "nvme_adminq_poll_period_us": 10000, 00:20:51.390 "nvme_ioq_poll_period_us": 0, 00:20:51.390 "io_queue_requests": 512, 00:20:51.390 "delay_cmd_submit": true, 00:20:51.390 "bdev_retry_count": 3, 00:20:51.390 "transport_ack_timeout": 0, 00:20:51.390 "ctrlr_loss_timeout_sec": 0, 00:20:51.390 "reconnect_delay_sec": 0, 00:20:51.390 "fast_io_fail_timeout_sec": 0, 00:20:51.390 "generate_uuids": false, 00:20:51.390 "transport_tos": 0, 00:20:51.390 "io_path_stat": false, 00:20:51.390 "allow_accel_sequence": false 00:20:51.390 } 00:20:51.390 }, 00:20:51.390 { 00:20:51.390 "method": "bdev_nvme_attach_controller", 00:20:51.390 "params": { 00:20:51.390 "name": "TLSTEST", 00:20:51.390 "trtype": "TCP", 00:20:51.390 "adrfam": "IPv4", 00:20:51.390 "traddr": "10.0.0.2", 00:20:51.390 "trsvcid": "4420", 00:20:51.390 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.390 "prchk_reftag": false, 00:20:51.390 "prchk_guard": false, 00:20:51.390 "ctrlr_loss_timeout_sec": 0, 00:20:51.390 "reconnect_delay_sec": 0, 00:20:51.390 "fast_io_fail_timeout_sec": 0, 00:20:51.390 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:51.390 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.390 "hdgst": false, 00:20:51.390 "ddgst": false 00:20:51.390 } 00:20:51.390 }, 00:20:51.390 { 00:20:51.390 "method": "bdev_nvme_set_hotplug", 00:20:51.390 
"params": { 00:20:51.390 "period_us": 100000, 00:20:51.390 "enable": false 00:20:51.390 } 00:20:51.390 }, 00:20:51.390 { 00:20:51.390 "method": "bdev_wait_for_examine" 00:20:51.390 } 00:20:51.390 ] 00:20:51.390 }, 00:20:51.390 { 00:20:51.390 "subsystem": "nbd", 00:20:51.390 "config": [] 00:20:51.390 } 00:20:51.390 ] 00:20:51.390 }' 00:20:51.390 22:19:46 -- target/tls.sh@208 -- # killprocess 3597249 00:20:51.390 22:19:46 -- common/autotest_common.sh@926 -- # '[' -z 3597249 ']' 00:20:51.390 22:19:46 -- common/autotest_common.sh@930 -- # kill -0 3597249 00:20:51.390 22:19:46 -- common/autotest_common.sh@931 -- # uname 00:20:51.390 22:19:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:51.390 22:19:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3597249 00:20:51.390 22:19:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:51.390 22:19:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:51.390 22:19:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3597249' 00:20:51.390 killing process with pid 3597249 00:20:51.390 22:19:46 -- common/autotest_common.sh@945 -- # kill 3597249 00:20:51.390 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.390 00:20:51.390 Latency(us) 00:20:51.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.390 =================================================================================================================== 00:20:51.390 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:51.390 22:19:46 -- common/autotest_common.sh@950 -- # wait 3597249 00:20:51.649 22:19:46 -- target/tls.sh@209 -- # killprocess 3596934 00:20:51.649 22:19:46 -- common/autotest_common.sh@926 -- # '[' -z 3596934 ']' 00:20:51.649 22:19:46 -- common/autotest_common.sh@930 -- # kill -0 3596934 00:20:51.649 22:19:46 -- common/autotest_common.sh@931 -- # uname 00:20:51.650 22:19:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:51.650 22:19:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3596934 00:20:51.650 22:19:46 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:51.650 22:19:46 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:51.650 22:19:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3596934' 00:20:51.650 killing process with pid 3596934 00:20:51.650 22:19:46 -- common/autotest_common.sh@945 -- # kill 3596934 00:20:51.650 22:19:46 -- common/autotest_common.sh@950 -- # wait 3596934 00:20:51.909 22:19:46 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:51.909 22:19:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:51.909 22:19:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:51.909 22:19:46 -- target/tls.sh@212 -- # echo '{ 00:20:51.909 "subsystems": [ 00:20:51.909 { 00:20:51.909 "subsystem": "iobuf", 00:20:51.909 "config": [ 00:20:51.909 { 00:20:51.909 "method": "iobuf_set_options", 00:20:51.909 "params": { 00:20:51.909 "small_pool_count": 8192, 00:20:51.909 "large_pool_count": 1024, 00:20:51.909 "small_bufsize": 8192, 00:20:51.909 "large_bufsize": 135168 00:20:51.909 } 00:20:51.909 } 00:20:51.909 ] 00:20:51.909 }, 00:20:51.909 { 00:20:51.909 "subsystem": "sock", 00:20:51.909 "config": [ 00:20:51.909 { 00:20:51.909 "method": "sock_impl_set_options", 00:20:51.909 "params": { 00:20:51.909 "impl_name": "posix", 00:20:51.909 "recv_buf_size": 2097152, 00:20:51.909 "send_buf_size": 2097152, 
00:20:51.909 "enable_recv_pipe": true, 00:20:51.909 "enable_quickack": false, 00:20:51.909 "enable_placement_id": 0, 00:20:51.909 "enable_zerocopy_send_server": true, 00:20:51.909 "enable_zerocopy_send_client": false, 00:20:51.909 "zerocopy_threshold": 0, 00:20:51.909 "tls_version": 0, 00:20:51.909 "enable_ktls": false 00:20:51.909 } 00:20:51.909 }, 00:20:51.909 { 00:20:51.909 "method": "sock_impl_set_options", 00:20:51.909 "params": { 00:20:51.909 "impl_name": "ssl", 00:20:51.909 "recv_buf_size": 4096, 00:20:51.909 "send_buf_size": 4096, 00:20:51.909 "enable_recv_pipe": true, 00:20:51.909 "enable_quickack": false, 00:20:51.909 "enable_placement_id": 0, 00:20:51.909 "enable_zerocopy_send_server": true, 00:20:51.909 "enable_zerocopy_send_client": false, 00:20:51.909 "zerocopy_threshold": 0, 00:20:51.909 "tls_version": 0, 00:20:51.909 "enable_ktls": false 00:20:51.909 } 00:20:51.909 } 00:20:51.909 ] 00:20:51.909 }, 00:20:51.909 { 00:20:51.909 "subsystem": "vmd", 00:20:51.909 "config": [] 00:20:51.909 }, 00:20:51.909 { 00:20:51.909 "subsystem": "accel", 00:20:51.909 "config": [ 00:20:51.909 { 00:20:51.909 "method": "accel_set_options", 00:20:51.909 "params": { 00:20:51.909 "small_cache_size": 128, 00:20:51.909 "large_cache_size": 16, 00:20:51.909 "task_count": 2048, 00:20:51.909 "sequence_count": 2048, 00:20:51.909 "buf_count": 2048 00:20:51.909 } 00:20:51.909 } 00:20:51.909 ] 00:20:51.909 }, 00:20:51.909 { 00:20:51.909 "subsystem": "bdev", 00:20:51.909 "config": [ 00:20:51.909 { 00:20:51.909 "method": "bdev_set_options", 00:20:51.909 "params": { 00:20:51.909 "bdev_io_pool_size": 65535, 00:20:51.909 "bdev_io_cache_size": 256, 00:20:51.909 "bdev_auto_examine": true, 00:20:51.909 "iobuf_small_cache_size": 128, 00:20:51.909 "iobuf_large_cache_size": 16 00:20:51.909 } 00:20:51.909 }, 00:20:51.909 { 00:20:51.909 "method": "bdev_raid_set_options", 00:20:51.909 "params": { 00:20:51.909 "process_window_size_kb": 1024 00:20:51.909 } 00:20:51.909 }, 00:20:51.909 { 00:20:51.909 "method": "bdev_iscsi_set_options", 00:20:51.909 "params": { 00:20:51.909 "timeout_sec": 30 00:20:51.909 } 00:20:51.909 }, 00:20:51.909 { 00:20:51.909 "method": "bdev_nvme_set_options", 00:20:51.909 "params": { 00:20:51.909 "action_on_timeout": "none", 00:20:51.909 "timeout_us": 0, 00:20:51.909 "timeout_admin_us": 0, 00:20:51.909 "keep_alive_timeout_ms": 10000, 00:20:51.909 "transport_retry_count": 4, 00:20:51.909 "arbitration_burst": 0, 00:20:51.909 "low_priority_weight": 0, 00:20:51.909 "medium_priority_weight": 0, 00:20:51.909 "high_priority_weight": 0, 00:20:51.909 "nvme_adminq_poll_period_us": 10000, 00:20:51.909 "nvme_ioq_poll_period_us": 0, 00:20:51.909 "io_queue_requests": 0, 00:20:51.909 "delay_cmd_submit": true, 00:20:51.909 "bdev_retry_count": 3, 00:20:51.909 "transport_ack_timeout": 0, 00:20:51.909 "ctrlr_loss_timeout_sec": 0, 00:20:51.909 "reconnect_delay_sec": 0, 00:20:51.909 "fast_io_fail_timeout_sec": 0, 00:20:51.909 "generate_uuids": false, 00:20:51.909 "transport_tos": 0, 00:20:51.909 "io_path_stat": false, 00:20:51.909 "allow_accel_sequence": false 00:20:51.909 } 00:20:51.909 }, 00:20:51.909 { 00:20:51.909 "method": "bdev_nvme_set_hotplug", 00:20:51.909 "params": { 00:20:51.909 "period_us": 100000, 00:20:51.909 "enable": false 00:20:51.909 } 00:20:51.909 }, 00:20:51.909 { 00:20:51.909 "method": "bdev_malloc_create", 00:20:51.909 "params": { 00:20:51.909 "name": "malloc0", 00:20:51.909 "num_blocks": 8192, 00:20:51.909 "block_size": 4096, 00:20:51.909 "physical_block_size": 4096, 00:20:51.910 "uuid": 
"985aecbe-3184-434d-802c-da58702929f7", 00:20:51.910 "optimal_io_boundary": 0 00:20:51.910 } 00:20:51.910 }, 00:20:51.910 { 00:20:51.910 "method": "bdev_wait_for_examine" 00:20:51.910 } 00:20:51.910 ] 00:20:51.910 }, 00:20:51.910 { 00:20:51.910 "subsystem": "nbd", 00:20:51.910 "config": [] 00:20:51.910 }, 00:20:51.910 { 00:20:51.910 "subsystem": "scheduler", 00:20:51.910 "config": [ 00:20:51.910 { 00:20:51.910 "method": "framework_set_scheduler", 00:20:51.910 "params": { 00:20:51.910 "name": "static" 00:20:51.910 } 00:20:51.910 } 00:20:51.910 ] 00:20:51.910 }, 00:20:51.910 { 00:20:51.910 "subsystem": "nvmf", 00:20:51.910 "config": [ 00:20:51.910 { 00:20:51.910 "method": "nvmf_set_config", 00:20:51.910 "params": { 00:20:51.910 "discovery_filter": "match_any", 00:20:51.910 "admin_cmd_passthru": { 00:20:51.910 "identify_ctrlr": false 00:20:51.910 } 00:20:51.910 } 00:20:51.910 }, 00:20:51.910 { 00:20:51.910 "method": "nvmf_set_max_subsystems", 00:20:51.910 "params": { 00:20:51.910 "max_subsystems": 1024 00:20:51.910 } 00:20:51.910 }, 00:20:51.910 { 00:20:51.910 "method": "nvmf_set_crdt", 00:20:51.910 "params": { 00:20:51.910 "crdt1": 0, 00:20:51.910 "crdt2": 0, 00:20:51.910 "crdt3": 0 00:20:51.910 } 00:20:51.910 }, 00:20:51.910 { 00:20:51.910 "method": "nvmf_create_transport", 00:20:51.910 "params": { 00:20:51.910 "trtype": "TCP", 00:20:51.910 "max_queue_depth": 128, 00:20:51.910 "max_io_qpairs_per_ctrlr": 127, 00:20:51.910 "in_capsule_data_size": 4096, 00:20:51.910 "max_io_size": 131072, 00:20:51.910 "io_unit_size": 131072, 00:20:51.910 "max_aq_depth": 128, 00:20:51.910 "num_shared_buffers": 511, 00:20:51.910 "buf_cache_size": 4294967295, 00:20:51.910 "dif_insert_or_strip": false, 00:20:51.910 "zcopy": false, 00:20:51.910 "c2h_success": false, 00:20:51.910 "sock_priority": 0, 00:20:51.910 "abort_timeout_sec": 1 00:20:51.910 } 00:20:51.910 }, 00:20:51.910 { 00:20:51.910 "method": "nvmf_create_subsystem", 00:20:51.910 "params": { 00:20:51.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.910 "allow_any_host": false, 00:20:51.910 "serial_number": "SPDK00000000000001", 00:20:51.910 "model_number": "SPDK bdev Controller", 00:20:51.910 "max_namespaces": 10, 00:20:51.910 "min_cntlid": 1, 00:20:51.910 "max_cntlid": 65519, 00:20:51.910 "ana_reporting": false 00:20:51.910 } 00:20:51.910 }, 00:20:51.910 { 00:20:51.910 "method": "nvmf_subsystem_add_host", 00:20:51.910 "params": { 00:20:51.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.910 "host": "nqn.2016-06.io.spdk:host1", 00:20:51.910 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:51.910 } 00:20:51.910 }, 00:20:51.910 { 00:20:51.910 "method": "nvmf_subsystem_add_ns", 00:20:51.910 "params": { 00:20:51.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.910 "namespace": { 00:20:51.910 "nsid": 1, 00:20:51.910 "bdev_name": "malloc0", 00:20:51.910 "nguid": "985AECBE3184434D802CDA58702929F7", 00:20:51.910 "uuid": "985aecbe-3184-434d-802c-da58702929f7" 00:20:51.910 } 00:20:51.910 } 00:20:51.910 }, 00:20:51.910 { 00:20:51.910 "method": "nvmf_subsystem_add_listener", 00:20:51.910 "params": { 00:20:51.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.910 "listen_address": { 00:20:51.910 "trtype": "TCP", 00:20:51.910 "adrfam": "IPv4", 00:20:51.910 "traddr": "10.0.0.2", 00:20:51.910 "trsvcid": "4420" 00:20:51.910 }, 00:20:51.910 "secure_channel": true 00:20:51.910 } 00:20:51.910 } 00:20:51.910 ] 00:20:51.910 } 00:20:51.910 ] 00:20:51.910 }' 00:20:51.910 22:19:46 -- common/autotest_common.sh@10 -- # set +x 
00:20:51.910 22:19:46 -- nvmf/common.sh@469 -- # nvmfpid=3597672 00:20:51.910 22:19:46 -- nvmf/common.sh@470 -- # waitforlisten 3597672 00:20:51.910 22:19:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:51.910 22:19:46 -- common/autotest_common.sh@819 -- # '[' -z 3597672 ']' 00:20:51.910 22:19:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.910 22:19:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:51.910 22:19:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.910 22:19:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:51.910 22:19:46 -- common/autotest_common.sh@10 -- # set +x 00:20:51.910 [2024-07-24 22:19:46.860183] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:51.910 [2024-07-24 22:19:46.860230] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.910 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.910 [2024-07-24 22:19:46.915963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.910 [2024-07-24 22:19:46.954213] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:51.910 [2024-07-24 22:19:46.954323] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.910 [2024-07-24 22:19:46.954331] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.910 [2024-07-24 22:19:46.954337] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:51.910 [2024-07-24 22:19:46.954353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.167 [2024-07-24 22:19:47.143724] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.167 [2024-07-24 22:19:47.175768] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:52.167 [2024-07-24 22:19:47.175937] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.735 22:19:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:52.735 22:19:47 -- common/autotest_common.sh@852 -- # return 0 00:20:52.735 22:19:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:52.735 22:19:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:52.735 22:19:47 -- common/autotest_common.sh@10 -- # set +x 00:20:52.735 22:19:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.735 22:19:47 -- target/tls.sh@216 -- # bdevperf_pid=3597803 00:20:52.735 22:19:47 -- target/tls.sh@217 -- # waitforlisten 3597803 /var/tmp/bdevperf.sock 00:20:52.735 22:19:47 -- common/autotest_common.sh@819 -- # '[' -z 3597803 ']' 00:20:52.735 22:19:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.735 22:19:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:52.735 22:19:47 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:52.735 22:19:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:52.735 22:19:47 -- target/tls.sh@213 -- # echo '{ 00:20:52.735 "subsystems": [ 00:20:52.735 { 00:20:52.735 "subsystem": "iobuf", 00:20:52.735 "config": [ 00:20:52.735 { 00:20:52.735 "method": "iobuf_set_options", 00:20:52.735 "params": { 00:20:52.735 "small_pool_count": 8192, 00:20:52.735 "large_pool_count": 1024, 00:20:52.735 "small_bufsize": 8192, 00:20:52.735 "large_bufsize": 135168 00:20:52.735 } 00:20:52.735 } 00:20:52.735 ] 00:20:52.735 }, 00:20:52.735 { 00:20:52.735 "subsystem": "sock", 00:20:52.735 "config": [ 00:20:52.735 { 00:20:52.735 "method": "sock_impl_set_options", 00:20:52.735 "params": { 00:20:52.735 "impl_name": "posix", 00:20:52.735 "recv_buf_size": 2097152, 00:20:52.735 "send_buf_size": 2097152, 00:20:52.735 "enable_recv_pipe": true, 00:20:52.735 "enable_quickack": false, 00:20:52.735 "enable_placement_id": 0, 00:20:52.735 "enable_zerocopy_send_server": true, 00:20:52.735 "enable_zerocopy_send_client": false, 00:20:52.735 "zerocopy_threshold": 0, 00:20:52.735 "tls_version": 0, 00:20:52.735 "enable_ktls": false 00:20:52.735 } 00:20:52.735 }, 00:20:52.735 { 00:20:52.735 "method": "sock_impl_set_options", 00:20:52.735 "params": { 00:20:52.735 "impl_name": "ssl", 00:20:52.735 "recv_buf_size": 4096, 00:20:52.735 "send_buf_size": 4096, 00:20:52.735 "enable_recv_pipe": true, 00:20:52.735 "enable_quickack": false, 00:20:52.735 "enable_placement_id": 0, 00:20:52.735 "enable_zerocopy_send_server": true, 00:20:52.735 "enable_zerocopy_send_client": false, 00:20:52.735 "zerocopy_threshold": 0, 00:20:52.735 "tls_version": 0, 00:20:52.735 "enable_ktls": false 00:20:52.735 } 00:20:52.735 } 00:20:52.735 ] 00:20:52.735 }, 00:20:52.735 { 00:20:52.735 "subsystem": "vmd", 00:20:52.735 "config": [] 00:20:52.735 }, 00:20:52.735 { 00:20:52.735 "subsystem": "accel", 00:20:52.735 "config": [ 00:20:52.735 { 00:20:52.735 "method": "accel_set_options", 00:20:52.735 "params": { 00:20:52.735 "small_cache_size": 128, 00:20:52.735 "large_cache_size": 16, 00:20:52.735 "task_count": 2048, 00:20:52.735 "sequence_count": 2048, 00:20:52.735 "buf_count": 2048 00:20:52.735 } 00:20:52.735 } 00:20:52.735 ] 00:20:52.735 }, 00:20:52.735 { 00:20:52.735 "subsystem": "bdev", 00:20:52.735 "config": [ 00:20:52.735 { 00:20:52.735 "method": "bdev_set_options", 00:20:52.735 "params": { 00:20:52.735 "bdev_io_pool_size": 65535, 00:20:52.735 "bdev_io_cache_size": 256, 00:20:52.735 "bdev_auto_examine": true, 00:20:52.735 "iobuf_small_cache_size": 128, 00:20:52.735 "iobuf_large_cache_size": 16 00:20:52.735 } 00:20:52.735 }, 00:20:52.735 { 00:20:52.735 "method": "bdev_raid_set_options", 00:20:52.735 "params": { 00:20:52.735 "process_window_size_kb": 1024 00:20:52.735 } 00:20:52.735 }, 00:20:52.735 { 00:20:52.735 "method": "bdev_iscsi_set_options", 00:20:52.735 "params": { 00:20:52.735 "timeout_sec": 30 00:20:52.735 } 00:20:52.735 }, 00:20:52.735 { 00:20:52.735 "method": "bdev_nvme_set_options", 00:20:52.735 "params": { 00:20:52.735 "action_on_timeout": "none", 00:20:52.735 "timeout_us": 0, 00:20:52.735 "timeout_admin_us": 0, 00:20:52.735 "keep_alive_timeout_ms": 10000, 00:20:52.735 "transport_retry_count": 4, 00:20:52.735 "arbitration_burst": 0, 00:20:52.735 "low_priority_weight": 0, 00:20:52.735 "medium_priority_weight": 0, 00:20:52.735 "high_priority_weight": 0, 00:20:52.735 "nvme_adminq_poll_period_us": 10000, 00:20:52.735 "nvme_ioq_poll_period_us": 0, 00:20:52.735 "io_queue_requests": 512, 00:20:52.735 "delay_cmd_submit": true, 00:20:52.735 "bdev_retry_count": 3, 00:20:52.735 "transport_ack_timeout": 0, 00:20:52.735 
"ctrlr_loss_timeout_sec": 0, 00:20:52.735 "reconnect_delay_sec": 0, 00:20:52.735 "fast_io_fail_timeout_sec": 0, 00:20:52.735 "generate_uuids": false, 00:20:52.735 "transport_tos": 0, 00:20:52.735 "io_path_stat": false, 00:20:52.735 "allow_accel_sequence": false 00:20:52.735 } 00:20:52.735 }, 00:20:52.735 { 00:20:52.735 "method": "bdev_nvme_attach_controller", 00:20:52.735 "params": { 00:20:52.735 "name": "TLSTEST", 00:20:52.735 "trtype": "TCP", 00:20:52.735 "adrfam": "IPv4", 00:20:52.735 "traddr": "10.0.0.2", 00:20:52.735 "trsvcid": "4420", 00:20:52.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:52.735 "prchk_reftag": false, 00:20:52.735 "prchk_guard": false, 00:20:52.735 "ctrlr_loss_timeout_sec": 0, 00:20:52.735 "reconnect_delay_sec": 0, 00:20:52.735 "fast_io_fail_timeout_sec": 0, 00:20:52.735 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:52.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:52.735 "hdgst": false, 00:20:52.735 "ddgst": false 00:20:52.735 } 00:20:52.735 }, 00:20:52.735 { 00:20:52.735 "method": "bdev_nvme_set_hotplug", 00:20:52.735 "params": { 00:20:52.735 "period_us": 100000, 00:20:52.736 "enable": false 00:20:52.736 } 00:20:52.736 }, 00:20:52.736 { 00:20:52.736 "method": "bdev_wait_for_examine" 00:20:52.736 } 00:20:52.736 ] 00:20:52.736 }, 00:20:52.736 { 00:20:52.736 "subsystem": "nbd", 00:20:52.736 "config": [] 00:20:52.736 } 00:20:52.736 ] 00:20:52.736 }' 00:20:52.736 22:19:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:52.736 22:19:47 -- common/autotest_common.sh@10 -- # set +x 00:20:52.736 [2024-07-24 22:19:47.737011] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:20:52.736 [2024-07-24 22:19:47.737067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3597803 ] 00:20:52.736 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.736 [2024-07-24 22:19:47.787276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.736 [2024-07-24 22:19:47.825519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.995 [2024-07-24 22:19:47.952814] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:53.562 22:19:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:53.562 22:19:48 -- common/autotest_common.sh@852 -- # return 0 00:20:53.562 22:19:48 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:53.562 Running I/O for 10 seconds... 
00:21:05.775 00:21:05.775 Latency(us) 00:21:05.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.775 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:05.775 Verification LBA range: start 0x0 length 0x2000 00:21:05.775 TLSTESTn1 : 10.04 1382.74 5.40 0.00 0.00 92444.19 6810.05 120358.29 00:21:05.775 =================================================================================================================== 00:21:05.775 Total : 1382.74 5.40 0.00 0.00 92444.19 6810.05 120358.29 00:21:05.775 0 00:21:05.775 22:19:58 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:05.775 22:19:58 -- target/tls.sh@223 -- # killprocess 3597803 00:21:05.775 22:19:58 -- common/autotest_common.sh@926 -- # '[' -z 3597803 ']' 00:21:05.775 22:19:58 -- common/autotest_common.sh@930 -- # kill -0 3597803 00:21:05.775 22:19:58 -- common/autotest_common.sh@931 -- # uname 00:21:05.776 22:19:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:05.776 22:19:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3597803 00:21:05.776 22:19:58 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:05.776 22:19:58 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:05.776 22:19:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3597803' 00:21:05.776 killing process with pid 3597803 00:21:05.776 22:19:58 -- common/autotest_common.sh@945 -- # kill 3597803 00:21:05.776 Received shutdown signal, test time was about 10.000000 seconds 00:21:05.776 00:21:05.776 Latency(us) 00:21:05.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.776 =================================================================================================================== 00:21:05.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.776 22:19:58 -- common/autotest_common.sh@950 -- # wait 3597803 00:21:05.776 22:19:58 -- target/tls.sh@224 -- # killprocess 3597672 00:21:05.776 22:19:58 -- common/autotest_common.sh@926 -- # '[' -z 3597672 ']' 00:21:05.776 22:19:58 -- common/autotest_common.sh@930 -- # kill -0 3597672 00:21:05.776 22:19:58 -- common/autotest_common.sh@931 -- # uname 00:21:05.776 22:19:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:05.776 22:19:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3597672 00:21:05.776 22:19:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:05.776 22:19:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:05.776 22:19:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3597672' 00:21:05.776 killing process with pid 3597672 00:21:05.776 22:19:58 -- common/autotest_common.sh@945 -- # kill 3597672 00:21:05.776 22:19:58 -- common/autotest_common.sh@950 -- # wait 3597672 00:21:05.776 22:19:59 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:21:05.776 22:19:59 -- target/tls.sh@227 -- # cleanup 00:21:05.776 22:19:59 -- target/tls.sh@15 -- # process_shm --id 0 00:21:05.776 22:19:59 -- common/autotest_common.sh@796 -- # type=--id 00:21:05.776 22:19:59 -- common/autotest_common.sh@797 -- # id=0 00:21:05.776 22:19:59 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:05.776 22:19:59 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:05.776 22:19:59 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:05.776 22:19:59 -- common/autotest_common.sh@804 -- # 
[[ -z nvmf_trace.0 ]] 00:21:05.776 22:19:59 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:05.776 22:19:59 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:05.776 nvmf_trace.0 00:21:05.776 22:19:59 -- common/autotest_common.sh@811 -- # return 0 00:21:05.776 22:19:59 -- target/tls.sh@16 -- # killprocess 3597803 00:21:05.776 22:19:59 -- common/autotest_common.sh@926 -- # '[' -z 3597803 ']' 00:21:05.776 22:19:59 -- common/autotest_common.sh@930 -- # kill -0 3597803 00:21:05.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3597803) - No such process 00:21:05.776 22:19:59 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3597803 is not found' 00:21:05.776 Process with pid 3597803 is not found 00:21:05.776 22:19:59 -- target/tls.sh@17 -- # nvmftestfini 00:21:05.776 22:19:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:05.776 22:19:59 -- nvmf/common.sh@116 -- # sync 00:21:05.776 22:19:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:05.776 22:19:59 -- nvmf/common.sh@119 -- # set +e 00:21:05.776 22:19:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:05.776 22:19:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:05.776 rmmod nvme_tcp 00:21:05.776 rmmod nvme_fabrics 00:21:05.776 rmmod nvme_keyring 00:21:05.776 22:19:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:05.776 22:19:59 -- nvmf/common.sh@123 -- # set -e 00:21:05.776 22:19:59 -- nvmf/common.sh@124 -- # return 0 00:21:05.776 22:19:59 -- nvmf/common.sh@477 -- # '[' -n 3597672 ']' 00:21:05.776 22:19:59 -- nvmf/common.sh@478 -- # killprocess 3597672 00:21:05.776 22:19:59 -- common/autotest_common.sh@926 -- # '[' -z 3597672 ']' 00:21:05.776 22:19:59 -- common/autotest_common.sh@930 -- # kill -0 3597672 00:21:05.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3597672) - No such process 00:21:05.776 22:19:59 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3597672 is not found' 00:21:05.776 Process with pid 3597672 is not found 00:21:05.776 22:19:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:05.776 22:19:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:05.776 22:19:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:05.776 22:19:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.776 22:19:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:05.776 22:19:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.776 22:19:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:05.776 22:19:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.346 22:20:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:06.346 22:20:01 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:06.346 00:21:06.346 real 1m11.041s 00:21:06.346 user 1m49.455s 00:21:06.346 sys 0m22.934s 00:21:06.346 22:20:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:06.346 22:20:01 -- common/autotest_common.sh@10 -- # set +x 00:21:06.346 ************************************ 00:21:06.346 END TEST nvmf_tls 00:21:06.346 ************************************ 
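The teardown that closes the TLS test follows the pattern visible in the trace above (sketched with $SPDK_DIR as shorthand; everything else is taken from the logged commands):

    tar -C /dev/shm/ -cvzf $SPDK_DIR/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0   # keep the trace for offline analysis
    modprobe -v -r nvme-tcp        # cascades into rmmod of nvme_tcp, nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1       # drop the initiator-side test address
    rm -f $SPDK_DIR/test/nvmf/target/key1.txt \
          $SPDK_DIR/test/nvmf/target/key2.txt \
          $SPDK_DIR/test/nvmf/target/key_long.txt    # remove the generated PSK files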
00:21:06.346 22:20:01 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:06.346 22:20:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:06.346 22:20:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:06.346 22:20:01 -- common/autotest_common.sh@10 -- # set +x 00:21:06.346 ************************************ 00:21:06.346 START TEST nvmf_fips 00:21:06.346 ************************************ 00:21:06.346 22:20:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:06.346 * Looking for test storage... 00:21:06.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:06.346 22:20:01 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.346 22:20:01 -- nvmf/common.sh@7 -- # uname -s 00:21:06.346 22:20:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.346 22:20:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.346 22:20:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.346 22:20:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.346 22:20:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.346 22:20:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.346 22:20:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.346 22:20:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.346 22:20:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.607 22:20:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.607 22:20:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:06.607 22:20:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:06.607 22:20:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.607 22:20:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.607 22:20:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.607 22:20:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.607 22:20:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.607 22:20:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.607 22:20:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.607 22:20:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.607 22:20:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.607 22:20:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.607 22:20:01 -- paths/export.sh@5 -- # export PATH 00:21:06.607 22:20:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.607 22:20:01 -- nvmf/common.sh@46 -- # : 0 00:21:06.607 22:20:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:06.607 22:20:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:06.607 22:20:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:06.607 22:20:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.607 22:20:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.608 22:20:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:06.608 22:20:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:06.608 22:20:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:06.608 22:20:01 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:06.608 22:20:01 -- fips/fips.sh@89 -- # check_openssl_version 00:21:06.608 22:20:01 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:06.608 22:20:01 -- fips/fips.sh@85 -- # openssl version 00:21:06.608 22:20:01 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:06.608 22:20:01 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:06.608 22:20:01 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:06.608 22:20:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:06.608 22:20:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:06.608 22:20:01 -- scripts/common.sh@335 -- # IFS=.-: 00:21:06.608 22:20:01 -- scripts/common.sh@335 -- # read -ra ver1 00:21:06.608 22:20:01 -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.608 22:20:01 -- scripts/common.sh@336 -- # read -ra ver2 00:21:06.608 22:20:01 -- scripts/common.sh@337 -- # local 'op=>=' 00:21:06.608 22:20:01 -- scripts/common.sh@339 -- # ver1_l=3 00:21:06.608 22:20:01 -- scripts/common.sh@340 -- # ver2_l=3 00:21:06.608 22:20:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
00:21:06.608 22:20:01 -- scripts/common.sh@343 -- # case "$op" in 00:21:06.608 22:20:01 -- scripts/common.sh@347 -- # : 1 00:21:06.608 22:20:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:06.608 22:20:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:06.608 22:20:01 -- scripts/common.sh@364 -- # decimal 3 00:21:06.608 22:20:01 -- scripts/common.sh@352 -- # local d=3 00:21:06.608 22:20:01 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:06.608 22:20:01 -- scripts/common.sh@354 -- # echo 3 00:21:06.608 22:20:01 -- scripts/common.sh@364 -- # ver1[v]=3 00:21:06.608 22:20:01 -- scripts/common.sh@365 -- # decimal 3 00:21:06.608 22:20:01 -- scripts/common.sh@352 -- # local d=3 00:21:06.608 22:20:01 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:06.608 22:20:01 -- scripts/common.sh@354 -- # echo 3 00:21:06.608 22:20:01 -- scripts/common.sh@365 -- # ver2[v]=3 00:21:06.608 22:20:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:06.608 22:20:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:06.608 22:20:01 -- scripts/common.sh@363 -- # (( v++ )) 00:21:06.608 22:20:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:06.608 22:20:01 -- scripts/common.sh@364 -- # decimal 0 00:21:06.608 22:20:01 -- scripts/common.sh@352 -- # local d=0 00:21:06.608 22:20:01 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:06.608 22:20:01 -- scripts/common.sh@354 -- # echo 0 00:21:06.608 22:20:01 -- scripts/common.sh@364 -- # ver1[v]=0 00:21:06.608 22:20:01 -- scripts/common.sh@365 -- # decimal 0 00:21:06.608 22:20:01 -- scripts/common.sh@352 -- # local d=0 00:21:06.608 22:20:01 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:06.608 22:20:01 -- scripts/common.sh@354 -- # echo 0 00:21:06.608 22:20:01 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:06.608 22:20:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:06.608 22:20:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:06.608 22:20:01 -- scripts/common.sh@363 -- # (( v++ )) 00:21:06.608 22:20:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:06.608 22:20:01 -- scripts/common.sh@364 -- # decimal 9 00:21:06.608 22:20:01 -- scripts/common.sh@352 -- # local d=9 00:21:06.608 22:20:01 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:06.608 22:20:01 -- scripts/common.sh@354 -- # echo 9 00:21:06.608 22:20:01 -- scripts/common.sh@364 -- # ver1[v]=9 00:21:06.608 22:20:01 -- scripts/common.sh@365 -- # decimal 0 00:21:06.608 22:20:01 -- scripts/common.sh@352 -- # local d=0 00:21:06.608 22:20:01 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:06.608 22:20:01 -- scripts/common.sh@354 -- # echo 0 00:21:06.608 22:20:01 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:06.608 22:20:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:06.608 22:20:01 -- scripts/common.sh@366 -- # return 0 00:21:06.608 22:20:01 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:06.608 22:20:01 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:06.608 22:20:01 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:06.608 22:20:01 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:06.608 22:20:01 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:06.608 22:20:01 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:06.608 22:20:01 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:06.608 22:20:01 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:06.608 22:20:01 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:06.608 22:20:01 -- fips/fips.sh@114 -- # build_openssl_config 00:21:06.608 22:20:01 -- fips/fips.sh@37 -- # cat 00:21:06.608 22:20:01 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:06.608 22:20:01 -- fips/fips.sh@58 -- # cat - 00:21:06.608 22:20:01 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:06.608 22:20:01 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:06.608 22:20:01 -- fips/fips.sh@117 -- # mapfile -t providers 00:21:06.608 22:20:01 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:21:06.608 22:20:01 -- fips/fips.sh@117 -- # openssl list -providers 00:21:06.608 22:20:01 -- fips/fips.sh@117 -- # grep name 00:21:06.608 22:20:01 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:06.608 22:20:01 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:06.608 22:20:01 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:06.608 22:20:01 -- fips/fips.sh@128 -- # : 00:21:06.608 22:20:01 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:06.608 22:20:01 -- common/autotest_common.sh@640 -- # local es=0 00:21:06.608 22:20:01 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:06.608 22:20:01 -- common/autotest_common.sh@628 -- # local arg=openssl 00:21:06.608 22:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:06.608 22:20:01 -- common/autotest_common.sh@632 -- # type -t openssl 00:21:06.608 22:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:06.608 22:20:01 -- common/autotest_common.sh@634 -- # type -P openssl 00:21:06.608 22:20:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:06.608 22:20:01 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:21:06.608 22:20:01 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:21:06.608 22:20:01 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:21:06.608 Error setting digest 00:21:06.608 00B22A0AD07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:06.608 00B22A0AD07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:06.608 22:20:01 -- common/autotest_common.sh@643 -- # es=1 00:21:06.608 22:20:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:06.608 22:20:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:06.608 22:20:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
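In outline, the FIPS precondition check traced above amounts to the following (a simplified sketch of the logic, not the literal fips.sh code; sort -V stands in for the script's own cmp_versions helper):

    ver=$(openssl version | awk '{print $2}')                                        # e.g. 3.0.9
    [[ $(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1) == 3.0.0 ]] || exit 1      # need OpenSSL >= 3.0.0
    [[ -f /usr/lib64/ossl-modules/fips.so ]] || exit 1                               # FIPS provider module present
    openssl list -providers | grep name                                              # expect a base and a fips provider
    export OPENSSL_CONF=spdk_fips.conf                                               # generated config forcing the fips provider
    if echo test | openssl md5 >/dev/null 2>&1; then                                 # MD5 must be rejected once FIPS is enforced
        echo 'openssl md5 unexpectedly succeeded - FIPS mode is not active' >&2
        exit 1
    fi

The 'Error setting digest' output above is therefore the expected result: the forced FIPS provider refuses the non-approved MD5 algorithm, which is exactly what the test asserts.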
00:21:06.608 22:20:01 -- fips/fips.sh@131 -- # nvmftestinit 00:21:06.608 22:20:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:06.608 22:20:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.608 22:20:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:06.608 22:20:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:06.608 22:20:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:06.608 22:20:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.608 22:20:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:06.608 22:20:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.608 22:20:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:06.608 22:20:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:06.608 22:20:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:06.608 22:20:01 -- common/autotest_common.sh@10 -- # set +x 00:21:11.883 22:20:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:11.883 22:20:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:11.883 22:20:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:11.883 22:20:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:11.883 22:20:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:11.883 22:20:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:11.883 22:20:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:11.883 22:20:06 -- nvmf/common.sh@294 -- # net_devs=() 00:21:11.883 22:20:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:11.883 22:20:06 -- nvmf/common.sh@295 -- # e810=() 00:21:11.883 22:20:06 -- nvmf/common.sh@295 -- # local -ga e810 00:21:11.883 22:20:06 -- nvmf/common.sh@296 -- # x722=() 00:21:11.883 22:20:06 -- nvmf/common.sh@296 -- # local -ga x722 00:21:11.883 22:20:06 -- nvmf/common.sh@297 -- # mlx=() 00:21:11.883 22:20:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:11.883 22:20:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.883 22:20:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.883 22:20:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.883 22:20:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.883 22:20:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.883 22:20:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.883 22:20:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.883 22:20:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.883 22:20:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.883 22:20:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.883 22:20:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.883 22:20:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:11.883 22:20:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:11.883 22:20:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:11.883 22:20:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:11.883 22:20:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:11.883 Found 0000:86:00.0 
(0x8086 - 0x159b) 00:21:11.883 22:20:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:11.883 22:20:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:11.883 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:11.883 22:20:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:11.883 22:20:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:11.883 22:20:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.883 22:20:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:11.883 22:20:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.883 22:20:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:11.883 Found net devices under 0000:86:00.0: cvl_0_0 00:21:11.883 22:20:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.883 22:20:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:11.883 22:20:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.883 22:20:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:11.883 22:20:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.883 22:20:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:11.883 Found net devices under 0000:86:00.1: cvl_0_1 00:21:11.883 22:20:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.883 22:20:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:11.883 22:20:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:11.883 22:20:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:11.883 22:20:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:11.883 22:20:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.883 22:20:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.883 22:20:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.883 22:20:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:11.883 22:20:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.883 22:20:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.883 22:20:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:11.883 22:20:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.883 22:20:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.883 22:20:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:11.883 22:20:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:11.883 22:20:06 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:21:11.883 22:20:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.883 22:20:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.883 22:20:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.883 22:20:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:12.143 22:20:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.143 22:20:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.143 22:20:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.143 22:20:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:12.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:21:12.143 00:21:12.143 --- 10.0.0.2 ping statistics --- 00:21:12.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.143 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:21:12.143 22:20:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:12.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.430 ms 00:21:12.143 00:21:12.143 --- 10.0.0.1 ping statistics --- 00:21:12.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.143 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms 00:21:12.143 22:20:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.143 22:20:07 -- nvmf/common.sh@410 -- # return 0 00:21:12.143 22:20:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:12.143 22:20:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.143 22:20:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:12.143 22:20:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:12.143 22:20:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.143 22:20:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:12.143 22:20:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:12.143 22:20:07 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:12.143 22:20:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:12.143 22:20:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:12.143 22:20:07 -- common/autotest_common.sh@10 -- # set +x 00:21:12.143 22:20:07 -- nvmf/common.sh@469 -- # nvmfpid=3603140 00:21:12.143 22:20:07 -- nvmf/common.sh@470 -- # waitforlisten 3603140 00:21:12.143 22:20:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:12.143 22:20:07 -- common/autotest_common.sh@819 -- # '[' -z 3603140 ']' 00:21:12.143 22:20:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.143 22:20:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:12.143 22:20:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.143 22:20:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:12.143 22:20:07 -- common/autotest_common.sh@10 -- # set +x 00:21:12.143 [2024-07-24 22:20:07.244224] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
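Note: the nvmf_tcp_init sequence traced above reduces to a two-namespace topology built from the two ports of the same NIC (interface names as reported in this run). A condensed sketch of the same commands, with the full workspace path to nvmf_tgt shortened:
ip netns add cvl_0_0_ns_spdk                                   # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic reach the initiator port
ping -c 1 10.0.0.2                                             # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator reachability check
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2   # target app then runs inside the namespace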
00:21:12.143 [2024-07-24 22:20:07.244275] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.143 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.402 [2024-07-24 22:20:07.303530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.402 [2024-07-24 22:20:07.341069] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:12.402 [2024-07-24 22:20:07.341182] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.402 [2024-07-24 22:20:07.341190] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.402 [2024-07-24 22:20:07.341196] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.402 [2024-07-24 22:20:07.341211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.970 22:20:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:12.970 22:20:08 -- common/autotest_common.sh@852 -- # return 0 00:21:12.970 22:20:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:12.970 22:20:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:12.970 22:20:08 -- common/autotest_common.sh@10 -- # set +x 00:21:12.970 22:20:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.970 22:20:08 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:12.970 22:20:08 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:12.970 22:20:08 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:12.970 22:20:08 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:12.970 22:20:08 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:12.970 22:20:08 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:12.970 22:20:08 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:12.970 22:20:08 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.230 [2024-07-24 22:20:08.210783] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.230 [2024-07-24 22:20:08.226791] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:13.230 [2024-07-24 22:20:08.226954] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.230 malloc0 00:21:13.230 22:20:08 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:13.230 22:20:08 -- fips/fips.sh@148 -- # bdevperf_pid=3603387 00:21:13.230 22:20:08 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:13.230 22:20:08 -- fips/fips.sh@149 -- # waitforlisten 3603387 /var/tmp/bdevperf.sock 00:21:13.230 22:20:08 -- common/autotest_common.sh@819 -- # '[' -z 3603387 ']' 00:21:13.230 22:20:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.230 22:20:08 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:21:13.230 22:20:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.230 22:20:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:13.230 22:20:08 -- common/autotest_common.sh@10 -- # set +x 00:21:13.230 [2024-07-24 22:20:08.344424] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:21:13.230 [2024-07-24 22:20:08.344473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603387 ] 00:21:13.490 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.490 [2024-07-24 22:20:08.393770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.490 [2024-07-24 22:20:08.431416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.091 22:20:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:14.091 22:20:09 -- common/autotest_common.sh@852 -- # return 0 00:21:14.091 22:20:09 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:14.357 [2024-07-24 22:20:09.263977] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.357 TLSTESTn1 00:21:14.357 22:20:09 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:14.357 Running I/O for 10 seconds... 
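Note: the FIPS/TLS case above is driven entirely over the bdevperf RPC socket. A condensed view of the same steps (workspace paths shortened; the redirection that writes the key file is not visible in the xtrace output, and the rpc.py calls issued by setup_nvmf_tgt_conf on the target side are not expanded in this trace):
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
chmod 0600 key.txt                                             # interchange-format TLS PSK, owner-only permissions
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests            # starts the 10-second verify workload reported below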
00:21:26.561 00:21:26.561 Latency(us) 00:21:26.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.561 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:26.561 Verification LBA range: start 0x0 length 0x2000 00:21:26.561 TLSTESTn1 : 10.05 1289.50 5.04 0.00 0.00 99091.99 5157.40 126740.93 00:21:26.561 =================================================================================================================== 00:21:26.561 Total : 1289.50 5.04 0.00 0.00 99091.99 5157.40 126740.93 00:21:26.561 0 00:21:26.561 22:20:19 -- fips/fips.sh@1 -- # cleanup 00:21:26.561 22:20:19 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:26.561 22:20:19 -- common/autotest_common.sh@796 -- # type=--id 00:21:26.561 22:20:19 -- common/autotest_common.sh@797 -- # id=0 00:21:26.561 22:20:19 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:26.561 22:20:19 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:26.561 22:20:19 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:26.561 22:20:19 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:26.561 22:20:19 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:26.561 22:20:19 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:26.561 nvmf_trace.0 00:21:26.561 22:20:19 -- common/autotest_common.sh@811 -- # return 0 00:21:26.561 22:20:19 -- fips/fips.sh@16 -- # killprocess 3603387 00:21:26.561 22:20:19 -- common/autotest_common.sh@926 -- # '[' -z 3603387 ']' 00:21:26.561 22:20:19 -- common/autotest_common.sh@930 -- # kill -0 3603387 00:21:26.561 22:20:19 -- common/autotest_common.sh@931 -- # uname 00:21:26.561 22:20:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:26.561 22:20:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3603387 00:21:26.561 22:20:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:26.561 22:20:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:26.561 22:20:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3603387' 00:21:26.561 killing process with pid 3603387 00:21:26.561 22:20:19 -- common/autotest_common.sh@945 -- # kill 3603387 00:21:26.561 Received shutdown signal, test time was about 10.000000 seconds 00:21:26.561 00:21:26.561 Latency(us) 00:21:26.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.561 =================================================================================================================== 00:21:26.561 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.561 22:20:19 -- common/autotest_common.sh@950 -- # wait 3603387 00:21:26.561 22:20:19 -- fips/fips.sh@17 -- # nvmftestfini 00:21:26.561 22:20:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:26.561 22:20:19 -- nvmf/common.sh@116 -- # sync 00:21:26.561 22:20:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:26.561 22:20:19 -- nvmf/common.sh@119 -- # set +e 00:21:26.561 22:20:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:26.561 22:20:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:26.561 rmmod nvme_tcp 00:21:26.561 rmmod nvme_fabrics 00:21:26.561 rmmod nvme_keyring 00:21:26.561 22:20:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:26.561 22:20:19 -- nvmf/common.sh@123 -- # set -e 00:21:26.561 22:20:19 -- nvmf/common.sh@124 -- # return 0 
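Note: the flattened bdevperf summary above reads as follows: TLSTESTn1 ran for 10.05 s and completed 1289.50 IO/s at 5.04 MiB/s with 0 failures and 0 timeouts; average/min/max latency were 99091.99/5157.40/126740.93 us. The throughput is consistent with the 4096-byte I/O size: 1289.50 IO/s x 4096 B is about 5.28 MB/s, i.e. roughly 5.04 MiB/s.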
00:21:26.561 22:20:19 -- nvmf/common.sh@477 -- # '[' -n 3603140 ']' 00:21:26.561 22:20:19 -- nvmf/common.sh@478 -- # killprocess 3603140 00:21:26.561 22:20:19 -- common/autotest_common.sh@926 -- # '[' -z 3603140 ']' 00:21:26.561 22:20:19 -- common/autotest_common.sh@930 -- # kill -0 3603140 00:21:26.561 22:20:19 -- common/autotest_common.sh@931 -- # uname 00:21:26.561 22:20:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:26.561 22:20:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3603140 00:21:26.561 22:20:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:26.561 22:20:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:26.561 22:20:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3603140' 00:21:26.561 killing process with pid 3603140 00:21:26.561 22:20:19 -- common/autotest_common.sh@945 -- # kill 3603140 00:21:26.561 22:20:19 -- common/autotest_common.sh@950 -- # wait 3603140 00:21:26.561 22:20:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:26.561 22:20:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:26.561 22:20:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:26.561 22:20:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:26.561 22:20:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:26.561 22:20:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.561 22:20:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.561 22:20:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.131 22:20:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:27.131 22:20:22 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:27.131 00:21:27.131 real 0m20.793s 00:21:27.131 user 0m23.358s 00:21:27.131 sys 0m8.184s 00:21:27.131 22:20:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:27.131 22:20:22 -- common/autotest_common.sh@10 -- # set +x 00:21:27.131 ************************************ 00:21:27.131 END TEST nvmf_fips 00:21:27.131 ************************************ 00:21:27.131 22:20:22 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:27.131 22:20:22 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:27.131 22:20:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:27.131 22:20:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:27.131 22:20:22 -- common/autotest_common.sh@10 -- # set +x 00:21:27.131 ************************************ 00:21:27.131 START TEST nvmf_fuzz 00:21:27.131 ************************************ 00:21:27.131 22:20:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:27.391 * Looking for test storage... 
00:21:27.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:27.391 22:20:22 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.391 22:20:22 -- nvmf/common.sh@7 -- # uname -s 00:21:27.391 22:20:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.391 22:20:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.391 22:20:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.391 22:20:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.391 22:20:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.391 22:20:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.391 22:20:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.391 22:20:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.391 22:20:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.391 22:20:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.391 22:20:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.391 22:20:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:27.391 22:20:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.391 22:20:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.391 22:20:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.391 22:20:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.391 22:20:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.391 22:20:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.391 22:20:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.391 22:20:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.391 22:20:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.391 22:20:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.391 22:20:22 -- paths/export.sh@5 -- # export PATH 00:21:27.391 22:20:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.391 22:20:22 -- nvmf/common.sh@46 -- # : 0 00:21:27.391 22:20:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:27.391 22:20:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:27.391 22:20:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:27.391 22:20:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.391 22:20:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.391 22:20:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:27.391 22:20:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:27.391 22:20:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:27.391 22:20:22 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:27.391 22:20:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:27.391 22:20:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.391 22:20:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:27.391 22:20:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:27.391 22:20:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:27.391 22:20:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.391 22:20:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.391 22:20:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.391 22:20:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:27.391 22:20:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:27.391 22:20:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:27.391 22:20:22 -- common/autotest_common.sh@10 -- # set +x 00:21:32.670 22:20:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:32.670 22:20:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:32.670 22:20:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:32.670 22:20:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:32.670 22:20:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:32.670 22:20:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:32.670 22:20:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:32.670 22:20:27 -- nvmf/common.sh@294 -- # net_devs=() 00:21:32.670 22:20:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:32.670 22:20:27 -- nvmf/common.sh@295 -- # e810=() 00:21:32.670 22:20:27 -- nvmf/common.sh@295 -- # local -ga e810 00:21:32.670 22:20:27 -- nvmf/common.sh@296 -- # x722=() 
00:21:32.670 22:20:27 -- nvmf/common.sh@296 -- # local -ga x722 00:21:32.670 22:20:27 -- nvmf/common.sh@297 -- # mlx=() 00:21:32.670 22:20:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:32.670 22:20:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.670 22:20:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.670 22:20:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.670 22:20:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.670 22:20:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.670 22:20:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.670 22:20:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.670 22:20:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.670 22:20:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.670 22:20:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.670 22:20:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.670 22:20:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:32.670 22:20:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:32.670 22:20:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:32.670 22:20:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:32.670 22:20:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:32.670 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:32.670 22:20:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:32.670 22:20:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:32.670 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:32.670 22:20:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:32.670 22:20:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:32.670 22:20:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.670 22:20:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:32.670 22:20:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.670 22:20:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:32.670 Found net devices under 0000:86:00.0: cvl_0_0 00:21:32.670 22:20:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:32.670 22:20:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:32.670 22:20:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.670 22:20:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:32.670 22:20:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.670 22:20:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:32.670 Found net devices under 0000:86:00.1: cvl_0_1 00:21:32.670 22:20:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.670 22:20:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:32.670 22:20:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:32.670 22:20:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:32.670 22:20:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:32.670 22:20:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.670 22:20:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.670 22:20:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.670 22:20:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:32.670 22:20:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.670 22:20:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.670 22:20:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:32.670 22:20:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.670 22:20:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.670 22:20:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:32.670 22:20:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:32.670 22:20:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.670 22:20:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.930 22:20:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.930 22:20:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.930 22:20:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:32.930 22:20:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.930 22:20:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.930 22:20:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.930 22:20:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:32.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:21:32.930 00:21:32.930 --- 10.0.0.2 ping statistics --- 00:21:32.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.930 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:21:32.930 22:20:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:32.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:21:32.930 00:21:32.930 --- 10.0.0.1 ping statistics --- 00:21:32.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.930 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:21:32.930 22:20:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.930 22:20:28 -- nvmf/common.sh@410 -- # return 0 00:21:32.930 22:20:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:32.930 22:20:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.930 22:20:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:32.930 22:20:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:32.930 22:20:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.930 22:20:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:32.930 22:20:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:32.930 22:20:28 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:32.930 22:20:28 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3608787 00:21:32.930 22:20:28 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:32.930 22:20:28 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3608787 00:21:32.930 22:20:28 -- common/autotest_common.sh@819 -- # '[' -z 3608787 ']' 00:21:32.930 22:20:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.930 22:20:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:32.930 22:20:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
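Note: the rpc_cmd calls traced below (rpc_cmd is the test harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock) stand up a minimal single-namespace fuzz target before nvme_fuzz is launched against it. Condensed, with the full workspace paths shortened:
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create -b Malloc0 64 512                    # 64 MB malloc bdev with 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -N -a \
    -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'   # 30 s run, fixed seed 123456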
00:21:32.930 22:20:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:32.930 22:20:28 -- common/autotest_common.sh@10 -- # set +x 00:21:33.869 22:20:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:33.869 22:20:28 -- common/autotest_common.sh@852 -- # return 0 00:21:33.869 22:20:28 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:33.869 22:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.869 22:20:28 -- common/autotest_common.sh@10 -- # set +x 00:21:33.869 22:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.869 22:20:28 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:33.869 22:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.869 22:20:28 -- common/autotest_common.sh@10 -- # set +x 00:21:33.869 Malloc0 00:21:33.869 22:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.869 22:20:28 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:33.869 22:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.869 22:20:28 -- common/autotest_common.sh@10 -- # set +x 00:21:33.869 22:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.869 22:20:28 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:33.869 22:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.869 22:20:28 -- common/autotest_common.sh@10 -- # set +x 00:21:33.869 22:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.869 22:20:28 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:33.869 22:20:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.869 22:20:28 -- common/autotest_common.sh@10 -- # set +x 00:21:33.869 22:20:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.869 22:20:28 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:33.869 22:20:28 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:22:05.956 Fuzzing completed. Shutting down the fuzz application 00:22:05.956 00:22:05.956 Dumping successful admin opcodes: 00:22:05.956 8, 9, 10, 24, 00:22:05.956 Dumping successful io opcodes: 00:22:05.956 0, 9, 00:22:05.957 NS: 0x200003aeff00 I/O qp, Total commands completed: 897641, total successful commands: 5232, random_seed: 2468475520 00:22:05.957 NS: 0x200003aeff00 admin qp, Total commands completed: 103681, total successful commands: 856, random_seed: 3898624960 00:22:05.957 22:20:59 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:22:05.957 Fuzzing completed. 
Shutting down the fuzz application 00:22:05.957 00:22:05.957 Dumping successful admin opcodes: 00:22:05.957 24, 00:22:05.957 Dumping successful io opcodes: 00:22:05.957 00:22:05.957 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1332495900 00:22:05.957 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1332573080 00:22:05.957 22:21:00 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:05.957 22:21:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.957 22:21:00 -- common/autotest_common.sh@10 -- # set +x 00:22:05.957 22:21:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.957 22:21:00 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:22:05.957 22:21:00 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:22:05.957 22:21:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:05.957 22:21:00 -- nvmf/common.sh@116 -- # sync 00:22:05.957 22:21:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:05.957 22:21:00 -- nvmf/common.sh@119 -- # set +e 00:22:05.957 22:21:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:05.957 22:21:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:05.957 rmmod nvme_tcp 00:22:05.957 rmmod nvme_fabrics 00:22:05.957 rmmod nvme_keyring 00:22:05.957 22:21:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:05.957 22:21:00 -- nvmf/common.sh@123 -- # set -e 00:22:05.957 22:21:00 -- nvmf/common.sh@124 -- # return 0 00:22:05.957 22:21:00 -- nvmf/common.sh@477 -- # '[' -n 3608787 ']' 00:22:05.957 22:21:00 -- nvmf/common.sh@478 -- # killprocess 3608787 00:22:05.957 22:21:00 -- common/autotest_common.sh@926 -- # '[' -z 3608787 ']' 00:22:05.957 22:21:00 -- common/autotest_common.sh@930 -- # kill -0 3608787 00:22:05.957 22:21:00 -- common/autotest_common.sh@931 -- # uname 00:22:05.957 22:21:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:05.957 22:21:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3608787 00:22:05.957 22:21:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:05.957 22:21:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:05.957 22:21:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3608787' 00:22:05.957 killing process with pid 3608787 00:22:05.957 22:21:00 -- common/autotest_common.sh@945 -- # kill 3608787 00:22:05.957 22:21:00 -- common/autotest_common.sh@950 -- # wait 3608787 00:22:05.957 22:21:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:05.957 22:21:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:05.957 22:21:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:05.957 22:21:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:05.957 22:21:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:05.957 22:21:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.957 22:21:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.957 22:21:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.498 22:21:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:08.498 22:21:03 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:22:08.498 00:22:08.498 real 0m40.838s 00:22:08.498 user 0m53.297s 00:22:08.498 sys 
0m16.869s 00:22:08.498 22:21:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.498 22:21:03 -- common/autotest_common.sh@10 -- # set +x 00:22:08.498 ************************************ 00:22:08.498 END TEST nvmf_fuzz 00:22:08.498 ************************************ 00:22:08.498 22:21:03 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:08.498 22:21:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:08.498 22:21:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:08.498 22:21:03 -- common/autotest_common.sh@10 -- # set +x 00:22:08.498 ************************************ 00:22:08.498 START TEST nvmf_multiconnection 00:22:08.498 ************************************ 00:22:08.498 22:21:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:22:08.498 * Looking for test storage... 00:22:08.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:08.498 22:21:03 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.498 22:21:03 -- nvmf/common.sh@7 -- # uname -s 00:22:08.498 22:21:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.498 22:21:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.498 22:21:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.498 22:21:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.498 22:21:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.498 22:21:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.498 22:21:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.498 22:21:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.498 22:21:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.498 22:21:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.498 22:21:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:08.498 22:21:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:08.498 22:21:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.498 22:21:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.498 22:21:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.498 22:21:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.498 22:21:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.498 22:21:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.498 22:21:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.498 22:21:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.498 22:21:03 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.498 22:21:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.498 22:21:03 -- paths/export.sh@5 -- # export PATH 00:22:08.498 22:21:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.498 22:21:03 -- nvmf/common.sh@46 -- # : 0 00:22:08.498 22:21:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:08.498 22:21:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:08.498 22:21:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:08.498 22:21:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.498 22:21:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.498 22:21:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:08.498 22:21:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:08.498 22:21:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:08.498 22:21:03 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:08.498 22:21:03 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:08.498 22:21:03 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:08.498 22:21:03 -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:08.498 22:21:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:08.498 22:21:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.498 22:21:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:08.498 22:21:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:08.498 22:21:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:08.498 22:21:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.498 22:21:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.498 22:21:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.498 22:21:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:08.498 22:21:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:08.498 22:21:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:08.498 22:21:03 -- common/autotest_common.sh@10 -- 
# set +x 00:22:13.771 22:21:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:13.771 22:21:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:13.771 22:21:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:13.771 22:21:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:13.771 22:21:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:13.771 22:21:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:13.771 22:21:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:13.771 22:21:08 -- nvmf/common.sh@294 -- # net_devs=() 00:22:13.771 22:21:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:13.771 22:21:08 -- nvmf/common.sh@295 -- # e810=() 00:22:13.771 22:21:08 -- nvmf/common.sh@295 -- # local -ga e810 00:22:13.771 22:21:08 -- nvmf/common.sh@296 -- # x722=() 00:22:13.771 22:21:08 -- nvmf/common.sh@296 -- # local -ga x722 00:22:13.771 22:21:08 -- nvmf/common.sh@297 -- # mlx=() 00:22:13.771 22:21:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:13.771 22:21:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.771 22:21:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.771 22:21:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.771 22:21:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.771 22:21:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.771 22:21:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.771 22:21:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.771 22:21:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.771 22:21:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.771 22:21:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.771 22:21:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.771 22:21:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:13.771 22:21:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:13.771 22:21:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:13.771 22:21:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:13.771 22:21:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:13.771 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:13.771 22:21:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:13.771 22:21:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:13.771 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:13.771 22:21:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.771 22:21:08 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:13.771 22:21:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:13.771 22:21:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.771 22:21:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:13.771 22:21:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.771 22:21:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:13.771 Found net devices under 0000:86:00.0: cvl_0_0 00:22:13.771 22:21:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.771 22:21:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:13.771 22:21:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.771 22:21:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:13.771 22:21:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.771 22:21:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:13.771 Found net devices under 0000:86:00.1: cvl_0_1 00:22:13.771 22:21:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.771 22:21:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:13.771 22:21:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:13.771 22:21:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:13.771 22:21:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:13.771 22:21:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.771 22:21:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.771 22:21:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.771 22:21:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:13.771 22:21:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.771 22:21:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.771 22:21:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:13.771 22:21:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.771 22:21:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.771 22:21:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:13.771 22:21:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:13.771 22:21:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.771 22:21:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.771 22:21:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.771 22:21:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.771 22:21:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:13.771 22:21:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.771 22:21:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:13.771 22:21:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.771 22:21:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:13.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:13.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:22:13.771 00:22:13.771 --- 10.0.0.2 ping statistics --- 00:22:13.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.771 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:22:13.771 22:21:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:13.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:22:13.772 00:22:13.772 --- 10.0.0.1 ping statistics --- 00:22:13.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.772 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:22:13.772 22:21:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.772 22:21:08 -- nvmf/common.sh@410 -- # return 0 00:22:13.772 22:21:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:13.772 22:21:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.772 22:21:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:13.772 22:21:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:13.772 22:21:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.772 22:21:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:13.772 22:21:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:13.772 22:21:08 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:13.772 22:21:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:13.772 22:21:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:13.772 22:21:08 -- common/autotest_common.sh@10 -- # set +x 00:22:13.772 22:21:08 -- nvmf/common.sh@469 -- # nvmfpid=3617961 00:22:13.772 22:21:08 -- nvmf/common.sh@470 -- # waitforlisten 3617961 00:22:13.772 22:21:08 -- common/autotest_common.sh@819 -- # '[' -z 3617961 ']' 00:22:13.772 22:21:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.772 22:21:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:13.772 22:21:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.772 22:21:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:13.772 22:21:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:13.772 22:21:08 -- common/autotest_common.sh@10 -- # set +x 00:22:13.772 [2024-07-24 22:21:08.349127] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:22:13.772 [2024-07-24 22:21:08.349169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.772 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.772 [2024-07-24 22:21:08.402871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:13.772 [2024-07-24 22:21:08.444397] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:13.772 [2024-07-24 22:21:08.444509] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:13.772 [2024-07-24 22:21:08.444517] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.772 [2024-07-24 22:21:08.444524] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.772 [2024-07-24 22:21:08.444568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.772 [2024-07-24 22:21:08.444667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.772 [2024-07-24 22:21:08.444757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.772 [2024-07-24 22:21:08.444758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.098 22:21:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:14.098 22:21:09 -- common/autotest_common.sh@852 -- # return 0 00:22:14.098 22:21:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:14.098 22:21:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:14.098 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.358 22:21:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.358 22:21:09 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:14.358 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 [2024-07-24 22:21:09.216620] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@21 -- # seq 1 11 00:22:14.359 22:21:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.359 22:21:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 Malloc1 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 [2024-07-24 22:21:09.272471] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.359 22:21:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:14.359 22:21:09 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 Malloc2 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.359 22:21:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 Malloc3 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.359 22:21:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 Malloc4 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.359 22:21:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 Malloc5 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.359 22:21:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 Malloc6 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.359 22:21:09 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.359 22:21:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.359 22:21:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:14.359 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.359 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 Malloc7 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.620 22:21:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 Malloc8 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.620 22:21:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 Malloc9 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.620 22:21:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 Malloc10 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.620 22:21:09 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 Malloc11 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:14.620 22:21:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.620 22:21:09 -- common/autotest_common.sh@10 -- # set +x 00:22:14.620 22:21:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.620 22:21:09 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:14.620 22:21:09 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.620 22:21:09 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:15.998 22:21:10 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:15.998 22:21:10 -- common/autotest_common.sh@1177 -- # local i=0 00:22:15.998 22:21:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:15.998 22:21:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:15.998 22:21:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:17.902 22:21:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:17.902 22:21:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:17.902 22:21:12 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:22:17.902 22:21:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:17.902 22:21:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:17.902 22:21:12 -- common/autotest_common.sh@1187 -- # return 0 00:22:17.902 22:21:12 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:17.902 22:21:12 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:19.277 22:21:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:19.277 22:21:14 -- common/autotest_common.sh@1177 -- # local i=0 00:22:19.277 22:21:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:19.277 22:21:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:19.277 22:21:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:21.182 22:21:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:21.182 22:21:16 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:21.182 22:21:16 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:22:21.182 22:21:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:21.182 22:21:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:21.182 22:21:16 -- common/autotest_common.sh@1187 -- # return 0 00:22:21.182 22:21:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:21.182 22:21:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:22.559 22:21:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:22.559 22:21:17 -- common/autotest_common.sh@1177 -- # local i=0 00:22:22.559 22:21:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:22.559 22:21:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:22.559 22:21:17 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:22:24.461 22:21:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:24.461 22:21:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:24.461 22:21:19 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:22:24.461 22:21:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:24.461 22:21:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:24.461 22:21:19 -- common/autotest_common.sh@1187 -- # return 0 00:22:24.461 22:21:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:24.461 22:21:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:25.393 22:21:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:25.393 22:21:20 -- common/autotest_common.sh@1177 -- # local i=0 00:22:25.393 22:21:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:25.393 22:21:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:25.393 22:21:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:27.923 22:21:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:27.923 22:21:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:27.923 22:21:22 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:22:27.923 22:21:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:27.923 22:21:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:27.923 22:21:22 -- common/autotest_common.sh@1187 -- # return 0 00:22:27.923 22:21:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:27.923 22:21:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:28.856 22:21:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:28.857 22:21:23 -- common/autotest_common.sh@1177 -- # local i=0 00:22:28.857 22:21:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:28.857 22:21:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:28.857 22:21:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:30.760 22:21:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:30.760 22:21:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:30.760 22:21:25 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:22:30.760 22:21:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:30.760 22:21:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:30.760 22:21:25 -- common/autotest_common.sh@1187 -- # return 0 00:22:30.760 22:21:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:30.760 22:21:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:32.137 22:21:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:32.137 22:21:27 -- common/autotest_common.sh@1177 -- # local i=0 00:22:32.137 22:21:27 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:22:32.137 22:21:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:32.137 22:21:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:34.041 22:21:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:34.041 22:21:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:34.041 22:21:29 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:34.041 22:21:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:34.041 22:21:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:34.041 22:21:29 -- common/autotest_common.sh@1187 -- # return 0 00:22:34.041 22:21:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:34.041 22:21:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:35.419 22:21:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:35.419 22:21:30 -- common/autotest_common.sh@1177 -- # local i=0 00:22:35.419 22:21:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:35.419 22:21:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:35.419 22:21:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:37.397 22:21:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:37.397 22:21:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:37.397 22:21:32 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:37.397 22:21:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:37.397 22:21:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:37.397 22:21:32 -- common/autotest_common.sh@1187 -- # return 0 00:22:37.397 22:21:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:37.397 22:21:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:38.778 22:21:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:38.778 22:21:33 -- common/autotest_common.sh@1177 -- # local i=0 00:22:38.778 22:21:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:38.778 22:21:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:38.778 22:21:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:41.318 22:21:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:41.318 22:21:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:41.318 22:21:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:41.318 22:21:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:41.318 22:21:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:41.318 22:21:35 -- common/autotest_common.sh@1187 -- # return 0 00:22:41.318 22:21:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:41.318 22:21:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:42.259 22:21:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:42.259 
22:21:37 -- common/autotest_common.sh@1177 -- # local i=0 00:22:42.259 22:21:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:42.259 22:21:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:42.259 22:21:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:44.168 22:21:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:44.168 22:21:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:44.168 22:21:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:22:44.168 22:21:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:44.168 22:21:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:44.168 22:21:39 -- common/autotest_common.sh@1187 -- # return 0 00:22:44.168 22:21:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:44.168 22:21:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:45.548 22:21:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:45.548 22:21:40 -- common/autotest_common.sh@1177 -- # local i=0 00:22:45.548 22:21:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:45.548 22:21:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:45.548 22:21:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:48.087 22:21:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:48.087 22:21:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:48.087 22:21:42 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:22:48.087 22:21:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:48.087 22:21:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:48.087 22:21:42 -- common/autotest_common.sh@1187 -- # return 0 00:22:48.087 22:21:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:48.087 22:21:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:49.477 22:21:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:49.477 22:21:44 -- common/autotest_common.sh@1177 -- # local i=0 00:22:49.477 22:21:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:49.477 22:21:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:49.477 22:21:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:51.384 22:21:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:51.384 22:21:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:51.384 22:21:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:22:51.384 22:21:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:51.384 22:21:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:51.384 22:21:46 -- common/autotest_common.sh@1187 -- # return 0 00:22:51.384 22:21:46 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:51.384 [global] 00:22:51.384 thread=1 00:22:51.384 invalidate=1 00:22:51.384 rw=read 00:22:51.384 time_based=1 00:22:51.384 
runtime=10 00:22:51.384 ioengine=libaio 00:22:51.384 direct=1 00:22:51.384 bs=262144 00:22:51.384 iodepth=64 00:22:51.384 norandommap=1 00:22:51.384 numjobs=1 00:22:51.384 00:22:51.384 [job0] 00:22:51.384 filename=/dev/nvme0n1 00:22:51.384 [job1] 00:22:51.384 filename=/dev/nvme10n1 00:22:51.384 [job2] 00:22:51.384 filename=/dev/nvme1n1 00:22:51.384 [job3] 00:22:51.384 filename=/dev/nvme2n1 00:22:51.384 [job4] 00:22:51.384 filename=/dev/nvme3n1 00:22:51.384 [job5] 00:22:51.384 filename=/dev/nvme4n1 00:22:51.384 [job6] 00:22:51.384 filename=/dev/nvme5n1 00:22:51.384 [job7] 00:22:51.384 filename=/dev/nvme6n1 00:22:51.384 [job8] 00:22:51.384 filename=/dev/nvme7n1 00:22:51.384 [job9] 00:22:51.384 filename=/dev/nvme8n1 00:22:51.384 [job10] 00:22:51.384 filename=/dev/nvme9n1 00:22:51.384 Could not set queue depth (nvme0n1) 00:22:51.384 Could not set queue depth (nvme10n1) 00:22:51.384 Could not set queue depth (nvme1n1) 00:22:51.384 Could not set queue depth (nvme2n1) 00:22:51.384 Could not set queue depth (nvme3n1) 00:22:51.384 Could not set queue depth (nvme4n1) 00:22:51.384 Could not set queue depth (nvme5n1) 00:22:51.384 Could not set queue depth (nvme6n1) 00:22:51.384 Could not set queue depth (nvme7n1) 00:22:51.384 Could not set queue depth (nvme8n1) 00:22:51.384 Could not set queue depth (nvme9n1) 00:22:51.643 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:51.643 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:51.643 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:51.643 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:51.643 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:51.643 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:51.643 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:51.643 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:51.643 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:51.643 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:51.643 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:51.643 fio-3.35 00:22:51.643 Starting 11 threads 00:23:03.869 00:23:03.869 job0: (groupid=0, jobs=1): err= 0: pid=3624841: Wed Jul 24 22:21:57 2024 00:23:03.869 read: IOPS=633, BW=158MiB/s (166MB/s)(1589MiB/10039msec) 00:23:03.869 slat (usec): min=7, max=139229, avg=1157.10, stdev=6471.82 00:23:03.869 clat (msec): min=2, max=309, avg=99.84, stdev=53.55 00:23:03.869 lat (msec): min=2, max=319, avg=100.99, stdev=54.32 00:23:03.869 clat percentiles (msec): 00:23:03.869 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 36], 20.00th=[ 50], 00:23:03.869 | 30.00th=[ 64], 40.00th=[ 75], 50.00th=[ 94], 60.00th=[ 114], 00:23:03.869 | 70.00th=[ 130], 80.00th=[ 150], 90.00th=[ 178], 95.00th=[ 194], 00:23:03.869 | 99.00th=[ 213], 99.50th=[ 218], 99.90th=[ 279], 99.95th=[ 292], 00:23:03.869 | 99.99th=[ 309] 00:23:03.869 bw ( KiB/s): min=77824, max=310784, per=8.20%, avg=161100.80, 
stdev=60342.71, samples=20 00:23:03.869 iops : min= 304, max= 1214, avg=629.30, stdev=235.71, samples=20 00:23:03.869 lat (msec) : 4=0.20%, 10=1.20%, 20=3.29%, 50=16.02%, 100=32.98% 00:23:03.869 lat (msec) : 250=46.15%, 500=0.17% 00:23:03.869 cpu : usr=0.27%, sys=2.31%, ctx=1794, majf=0, minf=4097 00:23:03.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:03.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:03.869 issued rwts: total=6356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.869 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:03.869 job1: (groupid=0, jobs=1): err= 0: pid=3624854: Wed Jul 24 22:21:57 2024 00:23:03.869 read: IOPS=715, BW=179MiB/s (188MB/s)(1799MiB/10051msec) 00:23:03.869 slat (usec): min=8, max=113520, avg=843.74, stdev=5077.94 00:23:03.869 clat (msec): min=4, max=245, avg=88.49, stdev=51.87 00:23:03.869 lat (msec): min=4, max=283, avg=89.33, stdev=52.57 00:23:03.869 clat percentiles (msec): 00:23:03.869 | 1.00th=[ 14], 5.00th=[ 22], 10.00th=[ 28], 20.00th=[ 39], 00:23:03.869 | 30.00th=[ 53], 40.00th=[ 63], 50.00th=[ 79], 60.00th=[ 97], 00:23:03.869 | 70.00th=[ 118], 80.00th=[ 140], 90.00th=[ 163], 95.00th=[ 182], 00:23:03.869 | 99.00th=[ 215], 99.50th=[ 222], 99.90th=[ 239], 99.95th=[ 245], 00:23:03.869 | 99.99th=[ 247] 00:23:03.869 bw ( KiB/s): min=90624, max=300032, per=9.30%, avg=182546.60, stdev=62157.43, samples=20 00:23:03.869 iops : min= 354, max= 1172, avg=713.05, stdev=242.80, samples=20 00:23:03.869 lat (msec) : 10=0.25%, 20=3.46%, 50=24.51%, 100=33.75%, 250=38.03% 00:23:03.869 cpu : usr=0.28%, sys=2.38%, ctx=2000, majf=0, minf=4097 00:23:03.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:23:03.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:03.869 issued rwts: total=7194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.869 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:03.869 job2: (groupid=0, jobs=1): err= 0: pid=3624871: Wed Jul 24 22:21:57 2024 00:23:03.869 read: IOPS=722, BW=181MiB/s (189MB/s)(1809MiB/10022msec) 00:23:03.869 slat (usec): min=8, max=171524, avg=923.45, stdev=5250.98 00:23:03.869 clat (msec): min=2, max=292, avg=87.61, stdev=57.80 00:23:03.869 lat (msec): min=3, max=292, avg=88.54, stdev=58.47 00:23:03.869 clat percentiles (msec): 00:23:03.869 | 1.00th=[ 11], 5.00th=[ 16], 10.00th=[ 23], 20.00th=[ 35], 00:23:03.869 | 30.00th=[ 43], 40.00th=[ 53], 50.00th=[ 77], 60.00th=[ 101], 00:23:03.869 | 70.00th=[ 122], 80.00th=[ 140], 90.00th=[ 174], 95.00th=[ 194], 00:23:03.869 | 99.00th=[ 236], 99.50th=[ 249], 99.90th=[ 271], 99.95th=[ 271], 00:23:03.869 | 99.99th=[ 292] 00:23:03.869 bw ( KiB/s): min=65024, max=376320, per=9.35%, avg=183628.80, stdev=93833.69, samples=20 00:23:03.869 iops : min= 254, max= 1470, avg=717.30, stdev=366.54, samples=20 00:23:03.869 lat (msec) : 4=0.03%, 10=0.86%, 20=7.30%, 50=30.09%, 100=21.32% 00:23:03.869 lat (msec) : 250=39.95%, 500=0.46% 00:23:03.869 cpu : usr=0.28%, sys=2.55%, ctx=2024, majf=0, minf=4097 00:23:03.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:23:03.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:03.869 issued rwts: 
total=7236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.869 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:03.869 job3: (groupid=0, jobs=1): err= 0: pid=3624882: Wed Jul 24 22:21:57 2024 00:23:03.869 read: IOPS=666, BW=167MiB/s (175MB/s)(1687MiB/10126msec) 00:23:03.869 slat (usec): min=9, max=179325, avg=1141.50, stdev=5447.03 00:23:03.869 clat (msec): min=3, max=274, avg=94.81, stdev=53.03 00:23:03.869 lat (msec): min=3, max=295, avg=95.95, stdev=53.54 00:23:03.869 clat percentiles (msec): 00:23:03.869 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 34], 20.00th=[ 45], 00:23:03.869 | 30.00th=[ 53], 40.00th=[ 65], 50.00th=[ 89], 60.00th=[ 112], 00:23:03.869 | 70.00th=[ 131], 80.00th=[ 146], 90.00th=[ 169], 95.00th=[ 184], 00:23:03.869 | 99.00th=[ 230], 99.50th=[ 234], 99.90th=[ 239], 99.95th=[ 255], 00:23:03.869 | 99.99th=[ 275] 00:23:03.869 bw ( KiB/s): min=104448, max=324096, per=8.71%, avg=171095.30, stdev=64162.96, samples=20 00:23:03.869 iops : min= 408, max= 1266, avg=668.30, stdev=250.68, samples=20 00:23:03.869 lat (msec) : 4=0.01%, 10=1.85%, 20=2.42%, 50=22.19%, 100=29.26% 00:23:03.869 lat (msec) : 250=44.20%, 500=0.07% 00:23:03.869 cpu : usr=0.23%, sys=2.47%, ctx=1660, majf=0, minf=4097 00:23:03.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:03.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:03.869 issued rwts: total=6747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.869 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:03.869 job4: (groupid=0, jobs=1): err= 0: pid=3624887: Wed Jul 24 22:21:57 2024 00:23:03.869 read: IOPS=562, BW=141MiB/s (148MB/s)(1428MiB/10148msec) 00:23:03.869 slat (usec): min=8, max=94473, avg=1343.86, stdev=6014.27 00:23:03.869 clat (msec): min=6, max=340, avg=112.21, stdev=54.16 00:23:03.869 lat (msec): min=6, max=340, avg=113.55, stdev=54.77 00:23:03.870 clat percentiles (msec): 00:23:03.870 | 1.00th=[ 16], 5.00th=[ 27], 10.00th=[ 35], 20.00th=[ 63], 00:23:03.870 | 30.00th=[ 83], 40.00th=[ 97], 50.00th=[ 110], 60.00th=[ 125], 00:23:03.870 | 70.00th=[ 144], 80.00th=[ 163], 90.00th=[ 186], 95.00th=[ 205], 00:23:03.870 | 99.00th=[ 226], 99.50th=[ 243], 99.90th=[ 284], 99.95th=[ 284], 00:23:03.870 | 99.99th=[ 342] 00:23:03.870 bw ( KiB/s): min=90112, max=374272, per=7.36%, avg=144563.20, stdev=63536.72, samples=20 00:23:03.870 iops : min= 352, max= 1462, avg=564.70, stdev=248.19, samples=20 00:23:03.870 lat (msec) : 10=0.42%, 20=1.94%, 50=13.66%, 100=27.04%, 250=56.44% 00:23:03.870 lat (msec) : 500=0.49% 00:23:03.870 cpu : usr=0.20%, sys=1.86%, ctx=1312, majf=0, minf=4097 00:23:03.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:23:03.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:03.870 issued rwts: total=5710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:03.870 job5: (groupid=0, jobs=1): err= 0: pid=3624902: Wed Jul 24 22:21:57 2024 00:23:03.870 read: IOPS=753, BW=188MiB/s (198MB/s)(1913MiB/10154msec) 00:23:03.870 slat (usec): min=7, max=131105, avg=702.11, stdev=4596.08 00:23:03.870 clat (msec): min=2, max=298, avg=84.12, stdev=56.27 00:23:03.870 lat (msec): min=2, max=298, avg=84.82, stdev=56.78 00:23:03.870 clat percentiles (msec): 00:23:03.870 | 1.00th=[ 8], 
5.00th=[ 15], 10.00th=[ 20], 20.00th=[ 30], 00:23:03.870 | 30.00th=[ 40], 40.00th=[ 58], 50.00th=[ 79], 60.00th=[ 94], 00:23:03.870 | 70.00th=[ 114], 80.00th=[ 132], 90.00th=[ 167], 95.00th=[ 188], 00:23:03.870 | 99.00th=[ 215], 99.50th=[ 253], 99.90th=[ 268], 99.95th=[ 268], 00:23:03.870 | 99.99th=[ 300] 00:23:03.870 bw ( KiB/s): min=106496, max=286208, per=9.89%, avg=194269.85, stdev=44619.89, samples=20 00:23:03.870 iops : min= 416, max= 1118, avg=758.85, stdev=174.31, samples=20 00:23:03.870 lat (msec) : 4=0.14%, 10=2.93%, 20=7.48%, 50=25.50%, 100=27.16% 00:23:03.870 lat (msec) : 250=35.96%, 500=0.84% 00:23:03.870 cpu : usr=0.18%, sys=2.36%, ctx=2387, majf=0, minf=4097 00:23:03.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:03.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:03.870 issued rwts: total=7652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:03.870 job6: (groupid=0, jobs=1): err= 0: pid=3624912: Wed Jul 24 22:21:57 2024 00:23:03.870 read: IOPS=753, BW=188MiB/s (197MB/s)(1904MiB/10109msec) 00:23:03.870 slat (usec): min=7, max=173997, avg=838.70, stdev=4872.68 00:23:03.870 clat (usec): min=1361, max=321262, avg=84024.63, stdev=53275.15 00:23:03.870 lat (usec): min=1391, max=321296, avg=84863.33, stdev=53910.10 00:23:03.870 clat percentiles (msec): 00:23:03.870 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 23], 20.00th=[ 37], 00:23:03.870 | 30.00th=[ 51], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 85], 00:23:03.870 | 70.00th=[ 106], 80.00th=[ 138], 90.00th=[ 165], 95.00th=[ 186], 00:23:03.870 | 99.00th=[ 220], 99.50th=[ 230], 99.90th=[ 245], 99.95th=[ 245], 00:23:03.870 | 99.99th=[ 321] 00:23:03.870 bw ( KiB/s): min=96768, max=374784, per=9.85%, avg=193356.80, stdev=72433.32, samples=20 00:23:03.870 iops : min= 378, max= 1464, avg=755.30, stdev=282.94, samples=20 00:23:03.870 lat (msec) : 2=0.04%, 4=0.21%, 10=3.02%, 20=4.83%, 50=21.66% 00:23:03.870 lat (msec) : 100=37.54%, 250=32.65%, 500=0.04% 00:23:03.870 cpu : usr=0.29%, sys=2.31%, ctx=2221, majf=0, minf=3347 00:23:03.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:03.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:03.870 issued rwts: total=7616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:03.870 job7: (groupid=0, jobs=1): err= 0: pid=3624920: Wed Jul 24 22:21:57 2024 00:23:03.870 read: IOPS=698, BW=175MiB/s (183MB/s)(1755MiB/10045msec) 00:23:03.870 slat (usec): min=7, max=137837, avg=885.15, stdev=4562.99 00:23:03.870 clat (msec): min=2, max=326, avg=90.60, stdev=48.28 00:23:03.870 lat (msec): min=2, max=326, avg=91.48, stdev=48.86 00:23:03.870 clat percentiles (msec): 00:23:03.870 | 1.00th=[ 11], 5.00th=[ 24], 10.00th=[ 37], 20.00th=[ 50], 00:23:03.870 | 30.00th=[ 59], 40.00th=[ 72], 50.00th=[ 85], 60.00th=[ 96], 00:23:03.870 | 70.00th=[ 109], 80.00th=[ 125], 90.00th=[ 161], 95.00th=[ 182], 00:23:03.870 | 99.00th=[ 226], 99.50th=[ 275], 99.90th=[ 305], 99.95th=[ 309], 00:23:03.870 | 99.99th=[ 326] 00:23:03.870 bw ( KiB/s): min=79360, max=256512, per=9.07%, avg=178124.80, stdev=55675.68, samples=20 00:23:03.870 iops : min= 310, max= 1002, avg=695.80, stdev=217.48, samples=20 00:23:03.870 lat 
(msec) : 4=0.16%, 10=0.73%, 20=2.98%, 50=16.41%, 100=43.37% 00:23:03.870 lat (msec) : 250=35.78%, 500=0.58% 00:23:03.870 cpu : usr=0.20%, sys=2.30%, ctx=2063, majf=0, minf=4097 00:23:03.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:23:03.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:03.870 issued rwts: total=7021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:03.870 job8: (groupid=0, jobs=1): err= 0: pid=3624944: Wed Jul 24 22:21:57 2024 00:23:03.870 read: IOPS=730, BW=183MiB/s (192MB/s)(1846MiB/10109msec) 00:23:03.870 slat (usec): min=7, max=97874, avg=843.78, stdev=3896.21 00:23:03.870 clat (msec): min=2, max=251, avg=86.67, stdev=45.30 00:23:03.870 lat (msec): min=2, max=251, avg=87.51, stdev=45.73 00:23:03.870 clat percentiles (msec): 00:23:03.870 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 32], 20.00th=[ 44], 00:23:03.870 | 30.00th=[ 59], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 94], 00:23:03.870 | 70.00th=[ 109], 80.00th=[ 125], 90.00th=[ 146], 95.00th=[ 169], 00:23:03.870 | 99.00th=[ 211], 99.50th=[ 232], 99.90th=[ 243], 99.95th=[ 243], 00:23:03.870 | 99.99th=[ 253] 00:23:03.870 bw ( KiB/s): min=82432, max=310272, per=9.54%, avg=187417.60, stdev=64405.08, samples=20 00:23:03.870 iops : min= 322, max= 1212, avg=732.10, stdev=251.58, samples=20 00:23:03.870 lat (msec) : 4=0.03%, 10=1.15%, 20=3.82%, 50=18.46%, 100=41.27% 00:23:03.870 lat (msec) : 250=35.26%, 500=0.01% 00:23:03.870 cpu : usr=0.23%, sys=2.31%, ctx=2080, majf=0, minf=4097 00:23:03.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:23:03.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:03.870 issued rwts: total=7385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:03.870 job9: (groupid=0, jobs=1): err= 0: pid=3624954: Wed Jul 24 22:21:57 2024 00:23:03.870 read: IOPS=709, BW=177MiB/s (186MB/s)(1794MiB/10119msec) 00:23:03.870 slat (usec): min=8, max=179394, avg=878.72, stdev=4957.17 00:23:03.870 clat (usec): min=1548, max=267924, avg=89255.07, stdev=52083.72 00:23:03.870 lat (usec): min=1574, max=326620, avg=90133.79, stdev=52708.55 00:23:03.870 clat percentiles (msec): 00:23:03.870 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 33], 20.00th=[ 44], 00:23:03.870 | 30.00th=[ 53], 40.00th=[ 62], 50.00th=[ 79], 60.00th=[ 96], 00:23:03.870 | 70.00th=[ 115], 80.00th=[ 140], 90.00th=[ 163], 95.00th=[ 186], 00:23:03.870 | 99.00th=[ 226], 99.50th=[ 234], 99.90th=[ 245], 99.95th=[ 245], 00:23:03.870 | 99.99th=[ 268] 00:23:03.870 bw ( KiB/s): min=87040, max=300032, per=9.27%, avg=182113.30, stdev=64202.88, samples=20 00:23:03.870 iops : min= 340, max= 1172, avg=711.35, stdev=250.78, samples=20 00:23:03.870 lat (msec) : 2=0.06%, 4=0.15%, 10=0.92%, 20=3.20%, 50=22.81% 00:23:03.870 lat (msec) : 100=35.04%, 250=37.79%, 500=0.03% 00:23:03.870 cpu : usr=0.31%, sys=2.28%, ctx=2066, majf=0, minf=4097 00:23:03.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:23:03.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:03.870 issued rwts: total=7177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:23:03.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:03.870 job10: (groupid=0, jobs=1): err= 0: pid=3624963: Wed Jul 24 22:21:57 2024 00:23:03.870 read: IOPS=770, BW=193MiB/s (202MB/s)(1947MiB/10107msec) 00:23:03.870 slat (usec): min=9, max=101387, avg=846.13, stdev=3470.35 00:23:03.870 clat (msec): min=4, max=251, avg=82.14, stdev=38.32 00:23:03.870 lat (msec): min=4, max=251, avg=82.98, stdev=38.63 00:23:03.870 clat percentiles (msec): 00:23:03.870 | 1.00th=[ 18], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 49], 00:23:03.870 | 30.00th=[ 59], 40.00th=[ 70], 50.00th=[ 80], 60.00th=[ 88], 00:23:03.870 | 70.00th=[ 97], 80.00th=[ 110], 90.00th=[ 136], 95.00th=[ 157], 00:23:03.870 | 99.00th=[ 190], 99.50th=[ 201], 99.90th=[ 245], 99.95th=[ 251], 00:23:03.870 | 99.99th=[ 253] 00:23:03.870 bw ( KiB/s): min=141312, max=310784, per=10.07%, avg=197734.40, stdev=39509.33, samples=20 00:23:03.870 iops : min= 552, max= 1214, avg=772.40, stdev=154.33, samples=20 00:23:03.870 lat (msec) : 10=0.12%, 20=1.57%, 50=19.90%, 100=51.60%, 250=26.78% 00:23:03.870 lat (msec) : 500=0.04% 00:23:03.870 cpu : usr=0.22%, sys=2.75%, ctx=2051, majf=0, minf=4097 00:23:03.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:23:03.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:03.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:03.870 issued rwts: total=7787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:03.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:03.870 00:23:03.870 Run status group 0 (all jobs): 00:23:03.870 READ: bw=1917MiB/s (2011MB/s), 141MiB/s-193MiB/s (148MB/s-202MB/s), io=19.0GiB (20.4GB), run=10022-10154msec 00:23:03.870 00:23:03.870 Disk stats (read/write): 00:23:03.871 nvme0n1: ios=12420/0, merge=0/0, ticks=1236298/0, in_queue=1236298, util=97.28% 00:23:03.871 nvme10n1: ios=14203/0, merge=0/0, ticks=1236779/0, in_queue=1236779, util=97.46% 00:23:03.871 nvme1n1: ios=14175/0, merge=0/0, ticks=1238883/0, in_queue=1238883, util=97.75% 00:23:03.871 nvme2n1: ios=13079/0, merge=0/0, ticks=1227796/0, in_queue=1227796, util=97.86% 00:23:03.871 nvme3n1: ios=11234/0, merge=0/0, ticks=1225575/0, in_queue=1225575, util=97.87% 00:23:03.871 nvme4n1: ios=15176/0, merge=0/0, ticks=1240580/0, in_queue=1240580, util=98.26% 00:23:03.871 nvme5n1: ios=15103/0, merge=0/0, ticks=1235806/0, in_queue=1235806, util=98.45% 00:23:03.871 nvme6n1: ios=13755/0, merge=0/0, ticks=1239228/0, in_queue=1239228, util=98.56% 00:23:03.871 nvme7n1: ios=14598/0, merge=0/0, ticks=1233441/0, in_queue=1233441, util=98.94% 00:23:03.871 nvme8n1: ios=14211/0, merge=0/0, ticks=1235278/0, in_queue=1235278, util=99.11% 00:23:03.871 nvme9n1: ios=15356/0, merge=0/0, ticks=1229774/0, in_queue=1229774, util=99.23% 00:23:03.871 22:21:57 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:23:03.871 [global] 00:23:03.871 thread=1 00:23:03.871 invalidate=1 00:23:03.871 rw=randwrite 00:23:03.871 time_based=1 00:23:03.871 runtime=10 00:23:03.871 ioengine=libaio 00:23:03.871 direct=1 00:23:03.871 bs=262144 00:23:03.871 iodepth=64 00:23:03.871 norandommap=1 00:23:03.871 numjobs=1 00:23:03.871 00:23:03.871 [job0] 00:23:03.871 filename=/dev/nvme0n1 00:23:03.871 [job1] 00:23:03.871 filename=/dev/nvme10n1 00:23:03.871 [job2] 00:23:03.871 filename=/dev/nvme1n1 00:23:03.871 [job3] 00:23:03.871 filename=/dev/nvme2n1 00:23:03.871 [job4] 
00:23:03.871 filename=/dev/nvme3n1 00:23:03.871 [job5] 00:23:03.871 filename=/dev/nvme4n1 00:23:03.871 [job6] 00:23:03.871 filename=/dev/nvme5n1 00:23:03.871 [job7] 00:23:03.871 filename=/dev/nvme6n1 00:23:03.871 [job8] 00:23:03.871 filename=/dev/nvme7n1 00:23:03.871 [job9] 00:23:03.871 filename=/dev/nvme8n1 00:23:03.871 [job10] 00:23:03.871 filename=/dev/nvme9n1 00:23:03.871 Could not set queue depth (nvme0n1) 00:23:03.871 Could not set queue depth (nvme10n1) 00:23:03.871 Could not set queue depth (nvme1n1) 00:23:03.871 Could not set queue depth (nvme2n1) 00:23:03.871 Could not set queue depth (nvme3n1) 00:23:03.871 Could not set queue depth (nvme4n1) 00:23:03.871 Could not set queue depth (nvme5n1) 00:23:03.871 Could not set queue depth (nvme6n1) 00:23:03.871 Could not set queue depth (nvme7n1) 00:23:03.871 Could not set queue depth (nvme8n1) 00:23:03.871 Could not set queue depth (nvme9n1) 00:23:03.871 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:03.871 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:03.871 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:03.871 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:03.871 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:03.871 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:03.871 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:03.871 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:03.871 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:03.871 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:03.871 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:23:03.871 fio-3.35 00:23:03.871 Starting 11 threads 00:23:13.887 00:23:13.887 job0: (groupid=0, jobs=1): err= 0: pid=3626560: Wed Jul 24 22:22:08 2024 00:23:13.887 write: IOPS=459, BW=115MiB/s (120MB/s)(1164MiB/10134msec); 0 zone resets 00:23:13.887 slat (usec): min=20, max=52200, avg=1598.43, stdev=4085.19 00:23:13.887 clat (msec): min=8, max=279, avg=137.38, stdev=49.41 00:23:13.887 lat (msec): min=8, max=280, avg=138.98, stdev=49.97 00:23:13.887 clat percentiles (msec): 00:23:13.887 | 1.00th=[ 27], 5.00th=[ 54], 10.00th=[ 72], 20.00th=[ 94], 00:23:13.887 | 30.00th=[ 116], 40.00th=[ 130], 50.00th=[ 140], 60.00th=[ 150], 00:23:13.887 | 70.00th=[ 163], 80.00th=[ 176], 90.00th=[ 199], 95.00th=[ 224], 00:23:13.887 | 99.00th=[ 257], 99.50th=[ 259], 99.90th=[ 275], 99.95th=[ 279], 00:23:13.887 | 99.99th=[ 279] 00:23:13.887 bw ( KiB/s): min=78179, max=190464, per=9.12%, avg=117564.40, stdev=26963.75, samples=20 00:23:13.887 iops : min= 305, max= 744, avg=459.20, stdev=105.38, samples=20 00:23:13.887 lat (msec) : 10=0.02%, 20=0.39%, 50=4.04%, 100=17.74%, 250=76.23% 00:23:13.887 lat (msec) : 500=1.59% 00:23:13.887 cpu : usr=1.03%, sys=1.39%, ctx=2306, majf=0, minf=1 00:23:13.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 
16=0.3%, 32=0.7%, >=64=98.6% 00:23:13.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.887 issued rwts: total=0,4657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.887 job1: (groupid=0, jobs=1): err= 0: pid=3626562: Wed Jul 24 22:22:08 2024 00:23:13.887 write: IOPS=486, BW=122MiB/s (128MB/s)(1229MiB/10106msec); 0 zone resets 00:23:13.887 slat (usec): min=20, max=123660, avg=1233.44, stdev=4431.34 00:23:13.887 clat (msec): min=3, max=380, avg=130.29, stdev=59.32 00:23:13.887 lat (msec): min=3, max=380, avg=131.52, stdev=60.12 00:23:13.887 clat percentiles (msec): 00:23:13.887 | 1.00th=[ 14], 5.00th=[ 40], 10.00th=[ 56], 20.00th=[ 78], 00:23:13.887 | 30.00th=[ 96], 40.00th=[ 110], 50.00th=[ 128], 60.00th=[ 144], 00:23:13.887 | 70.00th=[ 161], 80.00th=[ 178], 90.00th=[ 209], 95.00th=[ 236], 00:23:13.887 | 99.00th=[ 271], 99.50th=[ 288], 99.90th=[ 342], 99.95th=[ 376], 00:23:13.887 | 99.99th=[ 380] 00:23:13.887 bw ( KiB/s): min=73728, max=177152, per=9.64%, avg=124217.70, stdev=33789.11, samples=20 00:23:13.887 iops : min= 288, max= 692, avg=485.20, stdev=132.01, samples=20 00:23:13.887 lat (msec) : 4=0.04%, 10=0.26%, 20=1.57%, 50=5.96%, 100=25.31% 00:23:13.887 lat (msec) : 250=63.18%, 500=3.68% 00:23:13.887 cpu : usr=0.95%, sys=1.50%, ctx=3037, majf=0, minf=1 00:23:13.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:13.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.887 issued rwts: total=0,4916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.887 job2: (groupid=0, jobs=1): err= 0: pid=3626563: Wed Jul 24 22:22:08 2024 00:23:13.887 write: IOPS=452, BW=113MiB/s (119MB/s)(1146MiB/10131msec); 0 zone resets 00:23:13.887 slat (usec): min=21, max=166688, avg=1448.77, stdev=5596.18 00:23:13.887 clat (msec): min=6, max=332, avg=139.94, stdev=59.03 00:23:13.887 lat (msec): min=6, max=332, avg=141.38, stdev=59.68 00:23:13.887 clat percentiles (msec): 00:23:13.887 | 1.00th=[ 22], 5.00th=[ 41], 10.00th=[ 62], 20.00th=[ 90], 00:23:13.887 | 30.00th=[ 109], 40.00th=[ 124], 50.00th=[ 140], 60.00th=[ 157], 00:23:13.887 | 70.00th=[ 167], 80.00th=[ 182], 90.00th=[ 218], 95.00th=[ 253], 00:23:13.887 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 321], 99.95th=[ 326], 00:23:13.887 | 99.99th=[ 334] 00:23:13.887 bw ( KiB/s): min=75776, max=155648, per=8.98%, avg=115688.25, stdev=23773.14, samples=20 00:23:13.887 iops : min= 296, max= 608, avg=451.85, stdev=92.82, samples=20 00:23:13.887 lat (msec) : 10=0.07%, 20=0.72%, 50=6.11%, 100=18.13%, 250=69.61% 00:23:13.887 lat (msec) : 500=5.37% 00:23:13.887 cpu : usr=1.19%, sys=1.41%, ctx=2643, majf=0, minf=1 00:23:13.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:23:13.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.888 issued rwts: total=0,4583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.888 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.888 job3: (groupid=0, jobs=1): err= 0: pid=3626564: Wed Jul 24 22:22:08 2024 00:23:13.888 write: IOPS=391, BW=97.8MiB/s (103MB/s)(992MiB/10147msec); 0 zone 
resets 00:23:13.888 slat (usec): min=20, max=657936, avg=1766.45, stdev=13387.76 00:23:13.888 clat (msec): min=4, max=2031, avg=161.83, stdev=217.89 00:23:13.888 lat (msec): min=4, max=2125, avg=163.60, stdev=220.25 00:23:13.888 clat percentiles (msec): 00:23:13.888 | 1.00th=[ 24], 5.00th=[ 46], 10.00th=[ 67], 20.00th=[ 91], 00:23:13.888 | 30.00th=[ 113], 40.00th=[ 122], 50.00th=[ 131], 60.00th=[ 142], 00:23:13.888 | 70.00th=[ 155], 80.00th=[ 174], 90.00th=[ 203], 95.00th=[ 249], 00:23:13.888 | 99.00th=[ 1787], 99.50th=[ 1821], 99.90th=[ 2022], 99.95th=[ 2022], 00:23:13.888 | 99.99th=[ 2039] 00:23:13.888 bw ( KiB/s): min= 2048, max=168622, per=7.75%, avg=99921.80, stdev=48900.87, samples=20 00:23:13.888 iops : min= 8, max= 658, avg=390.25, stdev=191.02, samples=20 00:23:13.888 lat (msec) : 10=0.20%, 20=0.40%, 50=5.70%, 100=16.73%, 250=72.18% 00:23:13.888 lat (msec) : 500=2.49%, 750=0.45%, 1000=0.25%, 2000=1.39%, >=2000=0.20% 00:23:13.888 cpu : usr=0.88%, sys=1.03%, ctx=2263, majf=0, minf=1 00:23:13.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:23:13.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.888 issued rwts: total=0,3968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.888 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.888 job4: (groupid=0, jobs=1): err= 0: pid=3626565: Wed Jul 24 22:22:08 2024 00:23:13.888 write: IOPS=594, BW=149MiB/s (156MB/s)(1504MiB/10126msec); 0 zone resets 00:23:13.888 slat (usec): min=19, max=83015, avg=1157.36, stdev=3608.71 00:23:13.888 clat (msec): min=3, max=392, avg=106.51, stdev=59.81 00:23:13.888 lat (msec): min=3, max=392, avg=107.67, stdev=60.35 00:23:13.888 clat percentiles (msec): 00:23:13.888 | 1.00th=[ 14], 5.00th=[ 27], 10.00th=[ 46], 20.00th=[ 67], 00:23:13.888 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 103], 00:23:13.888 | 70.00th=[ 130], 80.00th=[ 155], 90.00th=[ 190], 95.00th=[ 215], 00:23:13.888 | 99.00th=[ 300], 99.50th=[ 313], 99.90th=[ 342], 99.95th=[ 347], 00:23:13.888 | 99.99th=[ 393] 00:23:13.888 bw ( KiB/s): min=56320, max=232448, per=11.82%, avg=152394.35, stdev=56632.35, samples=20 00:23:13.888 iops : min= 220, max= 908, avg=595.20, stdev=221.25, samples=20 00:23:13.888 lat (msec) : 4=0.02%, 10=0.17%, 20=2.31%, 50=9.37%, 100=46.57% 00:23:13.888 lat (msec) : 250=38.04%, 500=3.52% 00:23:13.888 cpu : usr=1.48%, sys=1.66%, ctx=3096, majf=0, minf=1 00:23:13.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:23:13.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.888 issued rwts: total=0,6017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.888 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.888 job5: (groupid=0, jobs=1): err= 0: pid=3626577: Wed Jul 24 22:22:08 2024 00:23:13.888 write: IOPS=394, BW=98.6MiB/s (103MB/s)(994MiB/10086msec); 0 zone resets 00:23:13.888 slat (usec): min=23, max=120186, avg=2077.84, stdev=6364.44 00:23:13.888 clat (msec): min=4, max=596, avg=160.02, stdev=73.49 00:23:13.888 lat (msec): min=7, max=596, avg=162.10, stdev=74.40 00:23:13.888 clat percentiles (msec): 00:23:13.888 | 1.00th=[ 27], 5.00th=[ 62], 10.00th=[ 88], 20.00th=[ 112], 00:23:13.888 | 30.00th=[ 129], 40.00th=[ 142], 50.00th=[ 155], 60.00th=[ 167], 00:23:13.888 | 70.00th=[ 178], 80.00th=[ 192], 90.00th=[ 
226], 95.00th=[ 275], 00:23:13.888 | 99.00th=[ 523], 99.50th=[ 535], 99.90th=[ 584], 99.95th=[ 600], 00:23:13.888 | 99.99th=[ 600] 00:23:13.888 bw ( KiB/s): min=32768, max=135168, per=7.77%, avg=100189.95, stdev=29042.67, samples=20 00:23:13.888 iops : min= 128, max= 528, avg=391.35, stdev=113.46, samples=20 00:23:13.888 lat (msec) : 10=0.08%, 20=0.33%, 50=2.51%, 100=10.99%, 250=78.88% 00:23:13.888 lat (msec) : 500=5.93%, 750=1.28% 00:23:13.888 cpu : usr=1.00%, sys=1.22%, ctx=1711, majf=0, minf=1 00:23:13.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:23:13.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.888 issued rwts: total=0,3977,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.888 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.888 job6: (groupid=0, jobs=1): err= 0: pid=3626578: Wed Jul 24 22:22:08 2024 00:23:13.888 write: IOPS=360, BW=90.0MiB/s (94.4MB/s)(910MiB/10108msec); 0 zone resets 00:23:13.888 slat (usec): min=23, max=906873, avg=2426.81, stdev=18890.09 00:23:13.888 clat (msec): min=31, max=2369, avg=174.94, stdev=265.86 00:23:13.888 lat (msec): min=31, max=2369, avg=177.37, stdev=268.69 00:23:13.888 clat percentiles (msec): 00:23:13.888 | 1.00th=[ 54], 5.00th=[ 71], 10.00th=[ 75], 20.00th=[ 88], 00:23:13.888 | 30.00th=[ 97], 40.00th=[ 109], 50.00th=[ 130], 60.00th=[ 140], 00:23:13.888 | 70.00th=[ 155], 80.00th=[ 171], 90.00th=[ 215], 95.00th=[ 317], 00:23:13.888 | 99.00th=[ 1989], 99.50th=[ 2232], 99.90th=[ 2333], 99.95th=[ 2366], 00:23:13.888 | 99.99th=[ 2366] 00:23:13.888 bw ( KiB/s): min= 2560, max=182784, per=7.48%, avg=96381.79, stdev=53419.04, samples=19 00:23:13.888 iops : min= 10, max= 714, avg=376.42, stdev=208.76, samples=19 00:23:13.888 lat (msec) : 50=0.74%, 100=31.90%, 250=59.45%, 500=5.00%, 750=0.80% 00:23:13.888 lat (msec) : 1000=0.27%, 2000=1.02%, >=2000=0.82% 00:23:13.888 cpu : usr=1.11%, sys=0.94%, ctx=1453, majf=0, minf=1 00:23:13.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:23:13.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.888 issued rwts: total=0,3640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.888 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.888 job7: (groupid=0, jobs=1): err= 0: pid=3626579: Wed Jul 24 22:22:08 2024 00:23:13.888 write: IOPS=534, BW=134MiB/s (140MB/s)(1356MiB/10136msec); 0 zone resets 00:23:13.888 slat (usec): min=18, max=152425, avg=1118.44, stdev=4818.59 00:23:13.888 clat (msec): min=5, max=367, avg=118.48, stdev=61.16 00:23:13.888 lat (msec): min=5, max=438, avg=119.59, stdev=61.90 00:23:13.888 clat percentiles (msec): 00:23:13.888 | 1.00th=[ 22], 5.00th=[ 38], 10.00th=[ 56], 20.00th=[ 75], 00:23:13.888 | 30.00th=[ 85], 40.00th=[ 94], 50.00th=[ 107], 60.00th=[ 116], 00:23:13.888 | 70.00th=[ 134], 80.00th=[ 155], 90.00th=[ 197], 95.00th=[ 241], 00:23:13.888 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 368], 99.95th=[ 368], 00:23:13.888 | 99.99th=[ 368] 00:23:13.888 bw ( KiB/s): min=67960, max=203264, per=10.64%, avg=137170.35, stdev=40066.42, samples=20 00:23:13.888 iops : min= 265, max= 794, avg=535.75, stdev=156.56, samples=20 00:23:13.888 lat (msec) : 10=0.02%, 20=0.70%, 50=7.62%, 100=35.10%, 250=52.12% 00:23:13.888 lat (msec) : 500=4.44% 00:23:13.888 cpu : usr=1.14%, sys=1.61%, 
ctx=3358, majf=0, minf=1 00:23:13.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:23:13.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.888 issued rwts: total=0,5422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.888 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.888 job8: (groupid=0, jobs=1): err= 0: pid=3626580: Wed Jul 24 22:22:08 2024 00:23:13.888 write: IOPS=462, BW=116MiB/s (121MB/s)(1166MiB/10092msec); 0 zone resets 00:23:13.888 slat (usec): min=17, max=83535, avg=1400.09, stdev=3860.49 00:23:13.888 clat (msec): min=9, max=310, avg=137.04, stdev=50.98 00:23:13.888 lat (msec): min=9, max=310, avg=138.44, stdev=51.65 00:23:13.888 clat percentiles (msec): 00:23:13.888 | 1.00th=[ 24], 5.00th=[ 45], 10.00th=[ 64], 20.00th=[ 97], 00:23:13.888 | 30.00th=[ 116], 40.00th=[ 132], 50.00th=[ 140], 60.00th=[ 150], 00:23:13.888 | 70.00th=[ 159], 80.00th=[ 174], 90.00th=[ 194], 95.00th=[ 222], 00:23:13.888 | 99.00th=[ 284], 99.50th=[ 292], 99.90th=[ 305], 99.95th=[ 309], 00:23:13.888 | 99.99th=[ 309] 00:23:13.888 bw ( KiB/s): min=74752, max=198656, per=9.14%, avg=117761.85, stdev=23905.25, samples=20 00:23:13.888 iops : min= 292, max= 776, avg=460.00, stdev=93.38, samples=20 00:23:13.888 lat (msec) : 10=0.02%, 20=0.30%, 50=6.22%, 100=15.33%, 250=75.62% 00:23:13.888 lat (msec) : 500=2.51% 00:23:13.888 cpu : usr=1.13%, sys=1.19%, ctx=2714, majf=0, minf=1 00:23:13.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:23:13.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.888 issued rwts: total=0,4664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.888 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.888 job9: (groupid=0, jobs=1): err= 0: pid=3626581: Wed Jul 24 22:22:08 2024 00:23:13.888 write: IOPS=585, BW=146MiB/s (153MB/s)(1482MiB/10131msec); 0 zone resets 00:23:13.888 slat (usec): min=18, max=91341, avg=908.29, stdev=3611.62 00:23:13.888 clat (usec): min=1933, max=344843, avg=108406.38, stdev=56694.53 00:23:13.888 lat (usec): min=1981, max=426705, avg=109314.67, stdev=57376.44 00:23:13.888 clat percentiles (msec): 00:23:13.888 | 1.00th=[ 14], 5.00th=[ 28], 10.00th=[ 43], 20.00th=[ 67], 00:23:13.888 | 30.00th=[ 81], 40.00th=[ 88], 50.00th=[ 101], 60.00th=[ 112], 00:23:13.888 | 70.00th=[ 126], 80.00th=[ 142], 90.00th=[ 186], 95.00th=[ 228], 00:23:13.888 | 99.00th=[ 296], 99.50th=[ 300], 99.90th=[ 321], 99.95th=[ 338], 00:23:13.888 | 99.99th=[ 347] 00:23:13.888 bw ( KiB/s): min=86016, max=217088, per=11.65%, avg=150139.40, stdev=35913.64, samples=20 00:23:13.888 iops : min= 336, max= 848, avg=586.40, stdev=140.29, samples=20 00:23:13.888 lat (msec) : 2=0.02%, 4=0.02%, 10=0.19%, 20=2.21%, 50=10.15% 00:23:13.888 lat (msec) : 100=37.29%, 250=46.99%, 500=3.14% 00:23:13.888 cpu : usr=1.21%, sys=1.73%, ctx=4023, majf=0, minf=1 00:23:13.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:23:13.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.889 issued rwts: total=0,5929,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.889 job10: (groupid=0, jobs=1): err= 0: 
pid=3626583: Wed Jul 24 22:22:08 2024 00:23:13.889 write: IOPS=326, BW=81.6MiB/s (85.6MB/s)(828MiB/10145msec); 0 zone resets 00:23:13.889 slat (usec): min=18, max=562950, avg=2300.43, stdev=13595.53 00:23:13.889 clat (msec): min=11, max=1178, avg=193.34, stdev=149.62 00:23:13.889 lat (msec): min=11, max=1178, avg=195.64, stdev=151.30 00:23:13.889 clat percentiles (msec): 00:23:13.889 | 1.00th=[ 24], 5.00th=[ 61], 10.00th=[ 82], 20.00th=[ 104], 00:23:13.889 | 30.00th=[ 120], 40.00th=[ 140], 50.00th=[ 159], 60.00th=[ 186], 00:23:13.889 | 70.00th=[ 213], 80.00th=[ 243], 90.00th=[ 309], 95.00th=[ 430], 00:23:13.889 | 99.00th=[ 986], 99.50th=[ 1116], 99.90th=[ 1167], 99.95th=[ 1183], 00:23:13.889 | 99.99th=[ 1183] 00:23:13.889 bw ( KiB/s): min= 6144, max=149716, per=6.45%, avg=83151.20, stdev=38980.71, samples=20 00:23:13.889 iops : min= 24, max= 584, avg=324.75, stdev=152.19, samples=20 00:23:13.889 lat (msec) : 20=0.57%, 50=2.96%, 100=13.67%, 250=64.99%, 500=13.79% 00:23:13.889 lat (msec) : 750=2.11%, 1000=1.33%, 2000=0.57% 00:23:13.889 cpu : usr=1.04%, sys=0.92%, ctx=1666, majf=0, minf=1 00:23:13.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:23:13.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:13.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:23:13.889 issued rwts: total=0,3313,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:13.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:23:13.889 00:23:13.889 Run status group 0 (all jobs): 00:23:13.889 WRITE: bw=1259MiB/s (1320MB/s), 81.6MiB/s-149MiB/s (85.6MB/s-156MB/s), io=12.5GiB (13.4GB), run=10086-10147msec 00:23:13.889 00:23:13.889 Disk stats (read/write): 00:23:13.889 nvme0n1: ios=52/9089, merge=0/0, ticks=1139/1212318, in_queue=1213457, util=99.42% 00:23:13.889 nvme10n1: ios=49/9648, merge=0/0, ticks=129/1223407, in_queue=1223536, util=97.90% 00:23:13.889 nvme1n1: ios=43/8975, merge=0/0, ticks=2317/1204795, in_queue=1207112, util=99.58% 00:23:13.889 nvme2n1: ios=49/7791, merge=0/0, ticks=57/1218639, in_queue=1218696, util=98.01% 00:23:13.889 nvme3n1: ios=49/11853, merge=0/0, ticks=116/1219836, in_queue=1219952, util=98.25% 00:23:13.889 nvme4n1: ios=44/7780, merge=0/0, ticks=2615/1209241, in_queue=1211856, util=99.91% 00:23:13.889 nvme5n1: ios=46/7034, merge=0/0, ticks=2706/1160037, in_queue=1162743, util=100.00% 00:23:13.889 nvme6n1: ios=0/10652, merge=0/0, ticks=0/1213145, in_queue=1213145, util=98.37% 00:23:13.889 nvme7n1: ios=0/9141, merge=0/0, ticks=0/1221771, in_queue=1221771, util=98.75% 00:23:13.889 nvme8n1: ios=0/11651, merge=0/0, ticks=0/1225132, in_queue=1225132, util=98.92% 00:23:13.889 nvme9n1: ios=0/6489, merge=0/0, ticks=0/1207850, in_queue=1207850, util=99.05% 00:23:13.889 22:22:08 -- target/multiconnection.sh@36 -- # sync 00:23:13.889 22:22:08 -- target/multiconnection.sh@37 -- # seq 1 11 00:23:13.889 22:22:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.889 22:22:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:13.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:13.889 22:22:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:23:13.889 22:22:08 -- common/autotest_common.sh@1198 -- # local i=0 00:23:13.889 22:22:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:13.889 22:22:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:23:13.889 22:22:08 -- 
common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:13.889 22:22:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:23:13.889 22:22:08 -- common/autotest_common.sh@1210 -- # return 0 00:23:13.889 22:22:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:13.889 22:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:13.889 22:22:08 -- common/autotest_common.sh@10 -- # set +x 00:23:13.889 22:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:13.889 22:22:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.889 22:22:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:23:13.889 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:23:13.889 22:22:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:23:13.889 22:22:08 -- common/autotest_common.sh@1198 -- # local i=0 00:23:13.889 22:22:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:13.889 22:22:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:23:13.889 22:22:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:13.889 22:22:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:23:13.889 22:22:08 -- common/autotest_common.sh@1210 -- # return 0 00:23:13.889 22:22:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:13.889 22:22:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:13.889 22:22:08 -- common/autotest_common.sh@10 -- # set +x 00:23:13.889 22:22:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:13.889 22:22:08 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:13.889 22:22:08 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:14.149 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:14.149 22:22:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:14.149 22:22:09 -- common/autotest_common.sh@1198 -- # local i=0 00:23:14.149 22:22:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:14.149 22:22:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:23:14.149 22:22:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:14.149 22:22:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:23:14.149 22:22:09 -- common/autotest_common.sh@1210 -- # return 0 00:23:14.149 22:22:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:14.149 22:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.149 22:22:09 -- common/autotest_common.sh@10 -- # set +x 00:23:14.149 22:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.149 22:22:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:14.149 22:22:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:14.409 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:14.409 22:22:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:14.409 22:22:09 -- common/autotest_common.sh@1198 -- # local i=0 00:23:14.409 22:22:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:14.409 22:22:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:23:14.409 22:22:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:14.409 22:22:09 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:23:14.409 22:22:09 -- common/autotest_common.sh@1210 -- # return 0 00:23:14.409 22:22:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:14.409 22:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.409 22:22:09 -- common/autotest_common.sh@10 -- # set +x 00:23:14.409 22:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.409 22:22:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:14.409 22:22:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:14.669 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:14.669 22:22:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:14.669 22:22:09 -- common/autotest_common.sh@1198 -- # local i=0 00:23:14.669 22:22:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:14.669 22:22:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:23:14.669 22:22:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:14.669 22:22:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:23:14.669 22:22:09 -- common/autotest_common.sh@1210 -- # return 0 00:23:14.669 22:22:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:14.669 22:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.669 22:22:09 -- common/autotest_common.sh@10 -- # set +x 00:23:14.669 22:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.669 22:22:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:14.669 22:22:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:14.928 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:14.928 22:22:09 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:14.928 22:22:09 -- common/autotest_common.sh@1198 -- # local i=0 00:23:14.928 22:22:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:14.928 22:22:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:23:14.928 22:22:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:23:14.928 22:22:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:14.928 22:22:09 -- common/autotest_common.sh@1210 -- # return 0 00:23:14.928 22:22:09 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:14.928 22:22:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:14.928 22:22:09 -- common/autotest_common.sh@10 -- # set +x 00:23:14.928 22:22:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:14.928 22:22:09 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:14.928 22:22:09 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:23:15.188 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:15.188 22:22:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:15.188 22:22:10 -- common/autotest_common.sh@1198 -- # local i=0 00:23:15.188 22:22:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:15.188 22:22:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:23:15.188 22:22:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:15.188 22:22:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:23:15.188 22:22:10 -- 
common/autotest_common.sh@1210 -- # return 0 00:23:15.188 22:22:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:15.188 22:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:15.188 22:22:10 -- common/autotest_common.sh@10 -- # set +x 00:23:15.188 22:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:15.188 22:22:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:15.188 22:22:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:15.448 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:15.448 22:22:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:15.448 22:22:10 -- common/autotest_common.sh@1198 -- # local i=0 00:23:15.448 22:22:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:15.448 22:22:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:23:15.448 22:22:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:15.448 22:22:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:23:15.448 22:22:10 -- common/autotest_common.sh@1210 -- # return 0 00:23:15.448 22:22:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:15.448 22:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:15.448 22:22:10 -- common/autotest_common.sh@10 -- # set +x 00:23:15.448 22:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:15.448 22:22:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:15.448 22:22:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:15.448 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:15.448 22:22:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:15.448 22:22:10 -- common/autotest_common.sh@1198 -- # local i=0 00:23:15.448 22:22:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:15.448 22:22:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:23:15.448 22:22:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:15.448 22:22:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:23:15.448 22:22:10 -- common/autotest_common.sh@1210 -- # return 0 00:23:15.448 22:22:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:15.448 22:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:15.448 22:22:10 -- common/autotest_common.sh@10 -- # set +x 00:23:15.448 22:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:15.448 22:22:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:15.448 22:22:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:15.709 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:15.709 22:22:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:15.709 22:22:10 -- common/autotest_common.sh@1198 -- # local i=0 00:23:15.709 22:22:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:15.709 22:22:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:23:15.709 22:22:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:15.709 22:22:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:23:15.709 22:22:10 -- common/autotest_common.sh@1210 -- # return 0 00:23:15.709 22:22:10 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:15.709 22:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:15.709 22:22:10 -- common/autotest_common.sh@10 -- # set +x 00:23:15.709 22:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:15.709 22:22:10 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:15.709 22:22:10 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:15.709 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:15.709 22:22:10 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:15.709 22:22:10 -- common/autotest_common.sh@1198 -- # local i=0 00:23:15.709 22:22:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:15.709 22:22:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:23:15.709 22:22:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:15.709 22:22:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:23:15.709 22:22:10 -- common/autotest_common.sh@1210 -- # return 0 00:23:15.709 22:22:10 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:23:15.709 22:22:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:15.709 22:22:10 -- common/autotest_common.sh@10 -- # set +x 00:23:15.709 22:22:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:15.709 22:22:10 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:23:15.709 22:22:10 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:23:15.709 22:22:10 -- target/multiconnection.sh@47 -- # nvmftestfini 00:23:15.709 22:22:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:15.709 22:22:10 -- nvmf/common.sh@116 -- # sync 00:23:15.709 22:22:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:15.709 22:22:10 -- nvmf/common.sh@119 -- # set +e 00:23:15.709 22:22:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:15.709 22:22:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:15.969 rmmod nvme_tcp 00:23:15.969 rmmod nvme_fabrics 00:23:15.969 rmmod nvme_keyring 00:23:15.969 22:22:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:15.969 22:22:10 -- nvmf/common.sh@123 -- # set -e 00:23:15.969 22:22:10 -- nvmf/common.sh@124 -- # return 0 00:23:15.969 22:22:10 -- nvmf/common.sh@477 -- # '[' -n 3617961 ']' 00:23:15.969 22:22:10 -- nvmf/common.sh@478 -- # killprocess 3617961 00:23:15.969 22:22:10 -- common/autotest_common.sh@926 -- # '[' -z 3617961 ']' 00:23:15.969 22:22:10 -- common/autotest_common.sh@930 -- # kill -0 3617961 00:23:15.969 22:22:10 -- common/autotest_common.sh@931 -- # uname 00:23:15.969 22:22:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:15.969 22:22:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3617961 00:23:15.969 22:22:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:15.969 22:22:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:15.969 22:22:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3617961' 00:23:15.969 killing process with pid 3617961 00:23:15.969 22:22:10 -- common/autotest_common.sh@945 -- # kill 3617961 00:23:15.969 22:22:10 -- common/autotest_common.sh@950 -- # wait 3617961 00:23:16.229 22:22:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:16.229 22:22:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:16.229 22:22:11 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:23:16.229 22:22:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:16.229 22:22:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:16.229 22:22:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.229 22:22:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.229 22:22:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.770 22:22:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:18.770 00:23:18.770 real 1m10.310s 00:23:18.770 user 4m12.615s 00:23:18.770 sys 0m20.803s 00:23:18.770 22:22:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.770 22:22:13 -- common/autotest_common.sh@10 -- # set +x 00:23:18.770 ************************************ 00:23:18.770 END TEST nvmf_multiconnection 00:23:18.770 ************************************ 00:23:18.770 22:22:13 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:18.770 22:22:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:18.770 22:22:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:18.770 22:22:13 -- common/autotest_common.sh@10 -- # set +x 00:23:18.770 ************************************ 00:23:18.770 START TEST nvmf_initiator_timeout 00:23:18.770 ************************************ 00:23:18.770 22:22:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:23:18.770 * Looking for test storage... 00:23:18.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:18.770 22:22:13 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.770 22:22:13 -- nvmf/common.sh@7 -- # uname -s 00:23:18.770 22:22:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.770 22:22:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.770 22:22:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.770 22:22:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.770 22:22:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.770 22:22:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.770 22:22:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.770 22:22:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.770 22:22:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.770 22:22:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.770 22:22:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:18.770 22:22:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:18.770 22:22:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.770 22:22:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.770 22:22:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.770 22:22:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.770 22:22:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.770 22:22:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.770 22:22:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.770 22:22:13 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.770 22:22:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.770 22:22:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.770 22:22:13 -- paths/export.sh@5 -- # export PATH 00:23:18.770 22:22:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.770 22:22:13 -- nvmf/common.sh@46 -- # : 0 00:23:18.770 22:22:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:18.770 22:22:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:18.770 22:22:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:18.770 22:22:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.770 22:22:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.770 22:22:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:18.770 22:22:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:18.770 22:22:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:18.770 22:22:13 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:18.770 22:22:13 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:18.770 22:22:13 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:18.770 22:22:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:18.770 22:22:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.770 22:22:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:18.770 22:22:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:18.770 22:22:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:18.770 22:22:13 
-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.770 22:22:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:18.770 22:22:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.770 22:22:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:18.770 22:22:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:18.770 22:22:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:18.770 22:22:13 -- common/autotest_common.sh@10 -- # set +x 00:23:24.051 22:22:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:24.051 22:22:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:24.051 22:22:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:24.051 22:22:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:24.051 22:22:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:24.051 22:22:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:24.051 22:22:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:24.051 22:22:18 -- nvmf/common.sh@294 -- # net_devs=() 00:23:24.051 22:22:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:24.051 22:22:18 -- nvmf/common.sh@295 -- # e810=() 00:23:24.051 22:22:18 -- nvmf/common.sh@295 -- # local -ga e810 00:23:24.051 22:22:18 -- nvmf/common.sh@296 -- # x722=() 00:23:24.051 22:22:18 -- nvmf/common.sh@296 -- # local -ga x722 00:23:24.051 22:22:18 -- nvmf/common.sh@297 -- # mlx=() 00:23:24.051 22:22:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:24.051 22:22:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.051 22:22:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.051 22:22:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.051 22:22:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.051 22:22:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.051 22:22:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.051 22:22:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.051 22:22:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.051 22:22:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.051 22:22:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.051 22:22:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.051 22:22:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:24.051 22:22:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:24.051 22:22:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:24.051 22:22:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:24.051 22:22:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:24.051 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:24.051 22:22:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@339 -- # for 
pci in "${pci_devs[@]}" 00:23:24.051 22:22:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:24.051 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:24.051 22:22:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:24.051 22:22:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:24.051 22:22:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:24.051 22:22:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.051 22:22:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:24.051 22:22:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.051 22:22:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:24.051 Found net devices under 0000:86:00.0: cvl_0_0 00:23:24.051 22:22:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.051 22:22:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:24.051 22:22:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.051 22:22:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:24.051 22:22:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.051 22:22:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:24.051 Found net devices under 0000:86:00.1: cvl_0_1 00:23:24.051 22:22:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.051 22:22:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:24.052 22:22:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:24.052 22:22:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:24.052 22:22:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:24.052 22:22:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:24.052 22:22:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.052 22:22:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.052 22:22:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.052 22:22:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:24.052 22:22:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.052 22:22:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.052 22:22:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:24.052 22:22:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.052 22:22:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.052 22:22:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:24.052 22:22:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:24.052 22:22:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.052 22:22:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.052 22:22:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.052 22:22:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.052 22:22:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:24.052 22:22:19 -- nvmf/common.sh@259 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.052 22:22:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.052 22:22:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.052 22:22:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:24.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:23:24.052 00:23:24.052 --- 10.0.0.2 ping statistics --- 00:23:24.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.052 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:23:24.052 22:22:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:24.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.539 ms 00:23:24.052 00:23:24.052 --- 10.0.0.1 ping statistics --- 00:23:24.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.052 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:23:24.052 22:22:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.052 22:22:19 -- nvmf/common.sh@410 -- # return 0 00:23:24.052 22:22:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:24.052 22:22:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.052 22:22:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:24.312 22:22:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:24.313 22:22:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.313 22:22:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:24.313 22:22:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:24.313 22:22:19 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:24.313 22:22:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:24.313 22:22:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:24.313 22:22:19 -- common/autotest_common.sh@10 -- # set +x 00:23:24.313 22:22:19 -- nvmf/common.sh@469 -- # nvmfpid=3631818 00:23:24.313 22:22:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:24.313 22:22:19 -- nvmf/common.sh@470 -- # waitforlisten 3631818 00:23:24.313 22:22:19 -- common/autotest_common.sh@819 -- # '[' -z 3631818 ']' 00:23:24.313 22:22:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.313 22:22:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:24.313 22:22:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.313 22:22:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:24.313 22:22:19 -- common/autotest_common.sh@10 -- # set +x 00:23:24.313 [2024-07-24 22:22:19.251730] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:23:24.313 [2024-07-24 22:22:19.251770] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.313 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.313 [2024-07-24 22:22:19.311647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:24.313 [2024-07-24 22:22:19.350945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:24.313 [2024-07-24 22:22:19.351066] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.313 [2024-07-24 22:22:19.351075] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.313 [2024-07-24 22:22:19.351082] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.313 [2024-07-24 22:22:19.351126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.313 [2024-07-24 22:22:19.351144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.313 [2024-07-24 22:22:19.351231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.313 [2024-07-24 22:22:19.351232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.252 22:22:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:25.252 22:22:20 -- common/autotest_common.sh@852 -- # return 0 00:23:25.252 22:22:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:25.252 22:22:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:25.252 22:22:20 -- common/autotest_common.sh@10 -- # set +x 00:23:25.252 22:22:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.252 22:22:20 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:25.252 22:22:20 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:25.252 22:22:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.252 22:22:20 -- common/autotest_common.sh@10 -- # set +x 00:23:25.252 Malloc0 00:23:25.252 22:22:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.252 22:22:20 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:25.252 22:22:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.252 22:22:20 -- common/autotest_common.sh@10 -- # set +x 00:23:25.252 Delay0 00:23:25.252 22:22:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.252 22:22:20 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:25.252 22:22:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.252 22:22:20 -- common/autotest_common.sh@10 -- # set +x 00:23:25.252 [2024-07-24 22:22:20.128732] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.252 22:22:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.252 22:22:20 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:25.252 22:22:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.252 22:22:20 -- common/autotest_common.sh@10 -- # set +x 00:23:25.252 22:22:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
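Condensed from the rpc_cmd and nvme invocations traced above and below (the helper plumbing — xtrace toggling, waitforserial retry loops — is omitted), the target/initiator setup for this initiator_timeout run amounts to the following sketch. It is a summary reconstructed from the trace, not the test script itself; every command name and argument below appears verbatim in the surrounding log.

    # Build a delay bdev on top of a malloc bdev, expose it over NVMe/TCP,
    # then connect to it from the initiator side (commands as traced).
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # delay latencies (microseconds)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

Later in the trace the test raises the Delay0 latencies to 31000000 (31 s) with bdev_delay_update_latency while fio runs, then restores them to 30, which is what exercises the initiator timeout path.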
00:23:25.252 22:22:20 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:25.252 22:22:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.252 22:22:20 -- common/autotest_common.sh@10 -- # set +x 00:23:25.252 22:22:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.252 22:22:20 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.252 22:22:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:25.252 22:22:20 -- common/autotest_common.sh@10 -- # set +x 00:23:25.252 [2024-07-24 22:22:20.153825] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.252 22:22:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:25.252 22:22:20 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:26.190 22:22:21 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:26.190 22:22:21 -- common/autotest_common.sh@1177 -- # local i=0 00:23:26.190 22:22:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:26.190 22:22:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:26.190 22:22:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:28.730 22:22:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:28.730 22:22:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:28.730 22:22:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:28.730 22:22:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:28.730 22:22:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:28.730 22:22:23 -- common/autotest_common.sh@1187 -- # return 0 00:23:28.730 22:22:23 -- target/initiator_timeout.sh@35 -- # fio_pid=3632547 00:23:28.730 22:22:23 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:28.730 22:22:23 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:28.730 [global] 00:23:28.730 thread=1 00:23:28.730 invalidate=1 00:23:28.730 rw=write 00:23:28.730 time_based=1 00:23:28.730 runtime=60 00:23:28.730 ioengine=libaio 00:23:28.730 direct=1 00:23:28.730 bs=4096 00:23:28.730 iodepth=1 00:23:28.730 norandommap=0 00:23:28.730 numjobs=1 00:23:28.730 00:23:28.730 verify_dump=1 00:23:28.730 verify_backlog=512 00:23:28.730 verify_state_save=0 00:23:28.730 do_verify=1 00:23:28.730 verify=crc32c-intel 00:23:28.730 [job0] 00:23:28.730 filename=/dev/nvme0n1 00:23:28.730 Could not set queue depth (nvme0n1) 00:23:28.730 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:28.730 fio-3.35 00:23:28.730 Starting 1 thread 00:23:31.268 22:22:26 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:31.268 22:22:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.268 22:22:26 -- common/autotest_common.sh@10 -- # set +x 00:23:31.268 true 00:23:31.268 22:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.268 22:22:26 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:31.268 22:22:26 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.268 22:22:26 -- common/autotest_common.sh@10 -- # set +x 00:23:31.268 true 00:23:31.268 22:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.268 22:22:26 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:31.268 22:22:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.268 22:22:26 -- common/autotest_common.sh@10 -- # set +x 00:23:31.268 true 00:23:31.268 22:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.268 22:22:26 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:31.268 22:22:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.268 22:22:26 -- common/autotest_common.sh@10 -- # set +x 00:23:31.268 true 00:23:31.268 22:22:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.268 22:22:26 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:34.560 22:22:29 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:34.560 22:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.560 22:22:29 -- common/autotest_common.sh@10 -- # set +x 00:23:34.560 true 00:23:34.560 22:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.560 22:22:29 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:34.560 22:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.560 22:22:29 -- common/autotest_common.sh@10 -- # set +x 00:23:34.560 true 00:23:34.560 22:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.560 22:22:29 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:34.560 22:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.560 22:22:29 -- common/autotest_common.sh@10 -- # set +x 00:23:34.560 true 00:23:34.560 22:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.560 22:22:29 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:34.560 22:22:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:34.560 22:22:29 -- common/autotest_common.sh@10 -- # set +x 00:23:34.560 true 00:23:34.560 22:22:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:34.560 22:22:29 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:34.560 22:22:29 -- target/initiator_timeout.sh@54 -- # wait 3632547 00:24:30.844 00:24:30.844 job0: (groupid=0, jobs=1): err= 0: pid=3632690: Wed Jul 24 22:23:23 2024 00:24:30.844 read: IOPS=57, BW=229KiB/s (235kB/s)(13.4MiB/60042msec) 00:24:30.844 slat (usec): min=6, max=15717, avg=17.75, stdev=334.28 00:24:30.844 clat (usec): min=381, max=41493k, avg=17111.88, stdev=707679.66 00:24:30.844 lat (usec): min=388, max=41493k, avg=17129.63, stdev=707679.64 00:24:30.844 clat percentiles (usec): 00:24:30.844 | 1.00th=[ 400], 5.00th=[ 494], 10.00th=[ 562], 00:24:30.844 | 20.00th=[ 594], 30.00th=[ 603], 40.00th=[ 619], 00:24:30.844 | 50.00th=[ 627], 60.00th=[ 644], 70.00th=[ 725], 00:24:30.844 | 80.00th=[ 914], 90.00th=[ 41681], 95.00th=[ 42206], 00:24:30.844 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 43254], 00:24:30.844 | 99.95th=[ 44827], 99.99th=[17112761] 00:24:30.844 write: IOPS=59, BW=239KiB/s (244kB/s)(14.0MiB/60042msec); 0 zone resets 00:24:30.844 slat (nsec): min=10234, max=50651, avg=11879.38, stdev=2197.59 00:24:30.844 clat (usec): min=235, max=901, avg=301.96, stdev=74.11 
00:24:30.844 lat (usec): min=246, max=930, avg=313.84, stdev=74.76 00:24:30.844 clat percentiles (usec): 00:24:30.844 | 1.00th=[ 247], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 265], 00:24:30.844 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:24:30.844 | 70.00th=[ 293], 80.00th=[ 314], 90.00th=[ 363], 95.00th=[ 437], 00:24:30.844 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 783], 99.95th=[ 807], 00:24:30.844 | 99.99th=[ 906] 00:24:30.844 bw ( KiB/s): min= 2712, max= 4976, per=100.00%, avg=4096.00, stdev=785.86, samples=7 00:24:30.844 iops : min= 678, max= 1244, avg=1024.00, stdev=196.47, samples=7 00:24:30.844 lat (usec) : 250=1.34%, 500=50.91%, 750=34.18%, 1000=5.70% 00:24:30.844 lat (msec) : 2=2.66%, 10=0.01%, 50=5.18%, >=2000=0.01% 00:24:30.844 cpu : usr=0.14%, sys=0.17%, ctx=7025, majf=0, minf=2 00:24:30.844 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:30.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.844 issued rwts: total=3438,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.844 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:30.844 00:24:30.844 Run status group 0 (all jobs): 00:24:30.844 READ: bw=229KiB/s (235kB/s), 229KiB/s-229KiB/s (235kB/s-235kB/s), io=13.4MiB (14.1MB), run=60042-60042msec 00:24:30.844 WRITE: bw=239KiB/s (244kB/s), 239KiB/s-239KiB/s (244kB/s-244kB/s), io=14.0MiB (14.7MB), run=60042-60042msec 00:24:30.844 00:24:30.844 Disk stats (read/write): 00:24:30.844 nvme0n1: ios=3533/3584, merge=0/0, ticks=17314/1032, in_queue=18346, util=99.60% 00:24:30.844 22:23:23 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:30.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:30.844 22:23:23 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:30.844 22:23:23 -- common/autotest_common.sh@1198 -- # local i=0 00:24:30.844 22:23:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:30.844 22:23:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:30.844 22:23:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:30.844 22:23:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:30.844 22:23:23 -- common/autotest_common.sh@1210 -- # return 0 00:24:30.844 22:23:23 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:30.844 22:23:23 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:30.844 nvmf hotplug test: fio successful as expected 00:24:30.844 22:23:23 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:30.844 22:23:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:30.844 22:23:23 -- common/autotest_common.sh@10 -- # set +x 00:24:30.844 22:23:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:30.844 22:23:23 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:30.844 22:23:23 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:30.844 22:23:23 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:30.844 22:23:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:30.844 22:23:23 -- nvmf/common.sh@116 -- # sync 00:24:30.844 22:23:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:30.844 22:23:23 -- nvmf/common.sh@119 -- # set +e 00:24:30.844 
22:23:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:30.844 22:23:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:30.844 rmmod nvme_tcp 00:24:30.844 rmmod nvme_fabrics 00:24:30.844 rmmod nvme_keyring 00:24:30.844 22:23:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:30.844 22:23:23 -- nvmf/common.sh@123 -- # set -e 00:24:30.844 22:23:23 -- nvmf/common.sh@124 -- # return 0 00:24:30.844 22:23:23 -- nvmf/common.sh@477 -- # '[' -n 3631818 ']' 00:24:30.844 22:23:23 -- nvmf/common.sh@478 -- # killprocess 3631818 00:24:30.844 22:23:23 -- common/autotest_common.sh@926 -- # '[' -z 3631818 ']' 00:24:30.844 22:23:23 -- common/autotest_common.sh@930 -- # kill -0 3631818 00:24:30.844 22:23:23 -- common/autotest_common.sh@931 -- # uname 00:24:30.844 22:23:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:30.844 22:23:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3631818 00:24:30.844 22:23:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:30.844 22:23:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:30.844 22:23:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3631818' 00:24:30.844 killing process with pid 3631818 00:24:30.844 22:23:23 -- common/autotest_common.sh@945 -- # kill 3631818 00:24:30.844 22:23:23 -- common/autotest_common.sh@950 -- # wait 3631818 00:24:30.844 22:23:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:30.844 22:23:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:30.844 22:23:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:30.844 22:23:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.844 22:23:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:30.844 22:23:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.844 22:23:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.844 22:23:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.104 22:23:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:31.104 00:24:31.104 real 1m12.757s 00:24:31.104 user 4m24.568s 00:24:31.105 sys 0m6.057s 00:24:31.105 22:23:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:31.105 22:23:26 -- common/autotest_common.sh@10 -- # set +x 00:24:31.105 ************************************ 00:24:31.105 END TEST nvmf_initiator_timeout 00:24:31.105 ************************************ 00:24:31.364 22:23:26 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:31.364 22:23:26 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:24:31.364 22:23:26 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:24:31.364 22:23:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:31.364 22:23:26 -- common/autotest_common.sh@10 -- # set +x 00:24:36.655 22:23:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:36.655 22:23:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:36.655 22:23:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:36.655 22:23:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:36.655 22:23:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:36.655 22:23:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:36.655 22:23:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:36.655 22:23:30 -- nvmf/common.sh@294 -- # net_devs=() 00:24:36.655 22:23:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:36.655 22:23:30 -- nvmf/common.sh@295 -- # e810=() 00:24:36.655 22:23:30 -- nvmf/common.sh@295 -- # local -ga e810 
00:24:36.655 22:23:30 -- nvmf/common.sh@296 -- # x722=() 00:24:36.655 22:23:30 -- nvmf/common.sh@296 -- # local -ga x722 00:24:36.655 22:23:30 -- nvmf/common.sh@297 -- # mlx=() 00:24:36.655 22:23:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:36.655 22:23:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.655 22:23:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.655 22:23:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.655 22:23:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.655 22:23:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.655 22:23:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.655 22:23:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.655 22:23:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.655 22:23:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.655 22:23:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.655 22:23:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.655 22:23:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:36.656 22:23:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:36.656 22:23:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:36.656 22:23:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:36.656 22:23:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:36.656 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:36.656 22:23:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:36.656 22:23:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:36.656 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:36.656 22:23:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:36.656 22:23:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:36.656 22:23:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.656 22:23:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:36.656 22:23:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.656 22:23:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:36.656 Found net devices under 0000:86:00.0: cvl_0_0 00:24:36.656 22:23:30 -- 
nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.656 22:23:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:36.656 22:23:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.656 22:23:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:36.656 22:23:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.656 22:23:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:36.656 Found net devices under 0000:86:00.1: cvl_0_1 00:24:36.656 22:23:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.656 22:23:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:36.656 22:23:30 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.656 22:23:30 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:24:36.656 22:23:30 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:36.656 22:23:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:36.656 22:23:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:36.656 22:23:30 -- common/autotest_common.sh@10 -- # set +x 00:24:36.656 ************************************ 00:24:36.656 START TEST nvmf_perf_adq 00:24:36.656 ************************************ 00:24:36.656 22:23:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:36.656 * Looking for test storage... 00:24:36.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:36.656 22:23:30 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.656 22:23:30 -- nvmf/common.sh@7 -- # uname -s 00:24:36.656 22:23:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.656 22:23:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.656 22:23:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.656 22:23:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.656 22:23:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.656 22:23:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.656 22:23:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.656 22:23:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.656 22:23:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.656 22:23:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.656 22:23:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:36.656 22:23:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:36.656 22:23:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.656 22:23:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.656 22:23:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.656 22:23:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.656 22:23:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.656 22:23:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.656 22:23:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.656 22:23:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.656 22:23:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.656 22:23:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.656 22:23:30 -- paths/export.sh@5 -- # export PATH 00:24:36.656 22:23:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.656 22:23:30 -- nvmf/common.sh@46 -- # : 0 00:24:36.656 22:23:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:36.656 22:23:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:36.656 22:23:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:36.656 22:23:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.656 22:23:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.656 22:23:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:36.656 22:23:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:36.656 22:23:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:36.656 22:23:30 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:36.656 22:23:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:36.656 22:23:30 -- common/autotest_common.sh@10 -- # set +x 00:24:40.848 22:23:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:40.848 22:23:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:40.848 22:23:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:40.848 22:23:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:40.848 22:23:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:40.848 22:23:35 -- nvmf/common.sh@292 -- # pci_drivers=() 
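The device-discovery pass that keeps reappearing in the trace (gather_supported_nvmf_pci_devs) is, in outline, a walk over the PCI bus matching the Intel E810/X722 and Mellanox IDs shown above and then listing the net devices bound to each match. A minimal stand-alone sketch, using only IDs visible in the log; the sysfs walk is an illustrative assumption, not the script's own pci_bus_cache mechanism:
#!/usr/bin/env bash
# Sketch: find NICs usable for the NVMe/TCP tests by PCI vendor:device ID.
intel=0x8086 mellanox=0x15b3
declare -a pci_devs net_devs
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")
    device=$(<"$dev/device")
    case "$vendor:$device" in
        $intel:0x1592|$intel:0x159b|$intel:0x37d2)  pci_devs+=("$dev") ;;  # E810 / X722
        $mellanox:0x101d|$mellanox:0x1017|$mellanox:0x1015|$mellanox:0x1013) pci_devs+=("$dev") ;;
    esac
done
for pci in "${pci_devs[@]}"; do
    for nic in "$pci"/net/*; do                  # e.g. cvl_0_0 / cvl_0_1 in this run
        [ -e "$nic" ] && net_devs+=("${nic##*/}")
    done
done
printf 'Found net device: %s\n' "${net_devs[@]}"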
00:24:40.848 22:23:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:40.848 22:23:35 -- nvmf/common.sh@294 -- # net_devs=() 00:24:40.848 22:23:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:40.848 22:23:35 -- nvmf/common.sh@295 -- # e810=() 00:24:40.848 22:23:35 -- nvmf/common.sh@295 -- # local -ga e810 00:24:40.848 22:23:35 -- nvmf/common.sh@296 -- # x722=() 00:24:40.848 22:23:35 -- nvmf/common.sh@296 -- # local -ga x722 00:24:40.848 22:23:35 -- nvmf/common.sh@297 -- # mlx=() 00:24:40.848 22:23:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:40.848 22:23:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.848 22:23:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.848 22:23:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.848 22:23:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.848 22:23:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.848 22:23:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.848 22:23:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.848 22:23:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.848 22:23:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.848 22:23:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.848 22:23:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.848 22:23:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:40.848 22:23:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:40.848 22:23:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:40.848 22:23:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.848 22:23:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:40.848 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:40.848 22:23:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.848 22:23:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:40.848 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:40.848 22:23:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:40.848 22:23:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:40.848 22:23:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.848 22:23:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.849 22:23:35 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.849 22:23:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.849 22:23:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:40.849 Found net devices under 0000:86:00.0: cvl_0_0 00:24:40.849 22:23:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.849 22:23:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.849 22:23:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.849 22:23:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:41.109 22:23:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.109 22:23:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:41.109 Found net devices under 0000:86:00.1: cvl_0_1 00:24:41.109 22:23:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.109 22:23:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:41.109 22:23:35 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.109 22:23:35 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:41.109 22:23:35 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:41.109 22:23:35 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:24:41.109 22:23:35 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:42.048 22:23:37 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:43.959 22:23:38 -- target/perf_adq.sh@54 -- # sleep 5 00:24:49.243 22:23:43 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:49.243 22:23:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:49.243 22:23:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.243 22:23:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:49.244 22:23:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:49.244 22:23:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:49.244 22:23:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.244 22:23:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.244 22:23:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.244 22:23:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:49.244 22:23:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:49.244 22:23:43 -- common/autotest_common.sh@10 -- # set +x 00:24:49.244 22:23:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:49.244 22:23:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:49.244 22:23:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:49.244 22:23:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:49.244 22:23:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:49.244 22:23:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:49.244 22:23:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:49.244 22:23:43 -- nvmf/common.sh@294 -- # net_devs=() 00:24:49.244 22:23:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:49.244 22:23:43 -- nvmf/common.sh@295 -- # e810=() 00:24:49.244 22:23:43 -- nvmf/common.sh@295 -- # local -ga e810 00:24:49.244 22:23:43 -- nvmf/common.sh@296 -- # x722=() 00:24:49.244 22:23:43 -- nvmf/common.sh@296 -- # local -ga x722 00:24:49.244 22:23:43 -- nvmf/common.sh@297 -- # mlx=() 00:24:49.244 22:23:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:49.244 22:23:43 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.244 22:23:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.244 22:23:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.244 22:23:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.244 22:23:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.244 22:23:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.244 22:23:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.244 22:23:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.244 22:23:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.244 22:23:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.244 22:23:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.244 22:23:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:49.244 22:23:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:49.244 22:23:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:49.244 22:23:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:49.244 22:23:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:49.244 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:49.244 22:23:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:49.244 22:23:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:49.244 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:49.244 22:23:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:49.244 22:23:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:49.244 22:23:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.244 22:23:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:49.244 22:23:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.244 22:23:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:49.244 Found net devices under 0000:86:00.0: cvl_0_0 00:24:49.244 22:23:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.244 22:23:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:49.244 22:23:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.244 22:23:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
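adq_reload_driver, traced just above, simply cycles the ice driver so the ADQ-related NIC state starts clean before each perf run; the fixed sleep mirrors the script, while the readiness poll at the end is an added illustration, not something the test does:
# Sketch of adq_reload_driver from perf_adq.sh (fixed sleep as in the trace).
rmmod ice
modprobe ice
sleep 5
# Illustrative alternative only: poll until the renamed test port reappears instead of sleeping.
for i in {1..10}; do
    ip link show cvl_0_0 >/dev/null 2>&1 && break
    sleep 1
done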
00:24:49.244 22:23:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.244 22:23:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:49.244 Found net devices under 0000:86:00.1: cvl_0_1 00:24:49.244 22:23:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.244 22:23:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:49.244 22:23:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:49.244 22:23:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:49.244 22:23:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:49.244 22:23:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.244 22:23:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.244 22:23:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.244 22:23:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:49.244 22:23:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.244 22:23:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.244 22:23:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:49.244 22:23:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.244 22:23:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.244 22:23:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:49.244 22:23:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:49.244 22:23:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.244 22:23:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.244 22:23:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.244 22:23:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.244 22:23:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:49.244 22:23:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.244 22:23:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.244 22:23:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.244 22:23:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:49.244 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.244 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:24:49.244 00:24:49.244 --- 10.0.0.2 ping statistics --- 00:24:49.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.244 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:24:49.244 22:23:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:49.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:24:49.244 00:24:49.244 --- 10.0.0.1 ping statistics --- 00:24:49.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.244 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:24:49.244 22:23:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.244 22:23:44 -- nvmf/common.sh@410 -- # return 0 00:24:49.244 22:23:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:49.244 22:23:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.244 22:23:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:49.244 22:23:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:49.244 22:23:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.244 22:23:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:49.244 22:23:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:49.244 22:23:44 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:49.244 22:23:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:49.244 22:23:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:49.244 22:23:44 -- common/autotest_common.sh@10 -- # set +x 00:24:49.244 22:23:44 -- nvmf/common.sh@469 -- # nvmfpid=3650261 00:24:49.244 22:23:44 -- nvmf/common.sh@470 -- # waitforlisten 3650261 00:24:49.244 22:23:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:49.244 22:23:44 -- common/autotest_common.sh@819 -- # '[' -z 3650261 ']' 00:24:49.244 22:23:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.244 22:23:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:49.244 22:23:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.244 22:23:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:49.244 22:23:44 -- common/autotest_common.sh@10 -- # set +x 00:24:49.244 [2024-07-24 22:23:44.118759] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:24:49.244 [2024-07-24 22:23:44.118800] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.244 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.244 [2024-07-24 22:23:44.177440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:49.244 [2024-07-24 22:23:44.216184] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:49.244 [2024-07-24 22:23:44.216306] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.244 [2024-07-24 22:23:44.216314] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.244 [2024-07-24 22:23:44.216325] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
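The ping exchange above completes nvmf_tcp_init: one physical port (cvl_0_0) is moved into a network namespace and acts as the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace, with the addresses and iptables rule exactly as logged and the address flushes/error handling omitted:
# Target/initiator split used by the test, condensed from the trace above.
ip netns add cvl_0_0_ns_spdk                       # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one port inside
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
# nvmf_tgt is then started inside the namespace, as in the trace:
#   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc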
00:24:49.244 [2024-07-24 22:23:44.216363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.244 [2024-07-24 22:23:44.216449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.244 [2024-07-24 22:23:44.216540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:49.244 [2024-07-24 22:23:44.216541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.245 22:23:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:49.245 22:23:44 -- common/autotest_common.sh@852 -- # return 0 00:24:49.245 22:23:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:49.245 22:23:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:49.245 22:23:44 -- common/autotest_common.sh@10 -- # set +x 00:24:49.245 22:23:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.245 22:23:44 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:49.245 22:23:44 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:49.245 22:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.245 22:23:44 -- common/autotest_common.sh@10 -- # set +x 00:24:49.245 22:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.245 22:23:44 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:49.245 22:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.245 22:23:44 -- common/autotest_common.sh@10 -- # set +x 00:24:49.504 22:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.504 22:23:44 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:49.504 22:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.504 22:23:44 -- common/autotest_common.sh@10 -- # set +x 00:24:49.504 [2024-07-24 22:23:44.405332] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.504 22:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.504 22:23:44 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:49.504 22:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.504 22:23:44 -- common/autotest_common.sh@10 -- # set +x 00:24:49.504 Malloc1 00:24:49.504 22:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.504 22:23:44 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:49.504 22:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.504 22:23:44 -- common/autotest_common.sh@10 -- # set +x 00:24:49.504 22:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.504 22:23:44 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:49.504 22:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.504 22:23:44 -- common/autotest_common.sh@10 -- # set +x 00:24:49.504 22:23:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.504 22:23:44 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.504 22:23:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:49.504 22:23:44 -- common/autotest_common.sh@10 -- # set +x 00:24:49.504 [2024-07-24 22:23:44.452996] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.504 22:23:44 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:49.504 22:23:44 -- target/perf_adq.sh@73 -- # perfpid=3650475 00:24:49.504 22:23:44 -- target/perf_adq.sh@74 -- # sleep 2 00:24:49.504 22:23:44 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:49.504 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.474 22:23:46 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:51.474 22:23:46 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:51.474 22:23:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:51.474 22:23:46 -- target/perf_adq.sh@76 -- # wc -l 00:24:51.474 22:23:46 -- common/autotest_common.sh@10 -- # set +x 00:24:51.474 22:23:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:51.474 22:23:46 -- target/perf_adq.sh@76 -- # count=4 00:24:51.474 22:23:46 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:51.474 22:23:46 -- target/perf_adq.sh@81 -- # wait 3650475 00:24:59.593 Initializing NVMe Controllers 00:24:59.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:59.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:59.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:59.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:59.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:59.593 Initialization complete. Launching workers. 00:24:59.593 ======================================================== 00:24:59.593 Latency(us) 00:24:59.593 Device Information : IOPS MiB/s Average min max 00:24:59.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10946.00 42.76 5847.00 1547.08 10127.65 00:24:59.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11396.40 44.52 5617.08 1523.54 11968.70 00:24:59.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11424.20 44.63 5602.57 1337.39 12043.25 00:24:59.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10037.20 39.21 6377.34 2214.57 20702.16 00:24:59.593 ======================================================== 00:24:59.593 Total : 43803.79 171.11 5844.96 1337.39 20702.16 00:24:59.593 00:24:59.593 22:23:54 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:59.593 22:23:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:59.593 22:23:54 -- nvmf/common.sh@116 -- # sync 00:24:59.593 22:23:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:59.593 22:23:54 -- nvmf/common.sh@119 -- # set +e 00:24:59.593 22:23:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:59.593 22:23:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:59.593 rmmod nvme_tcp 00:24:59.593 rmmod nvme_fabrics 00:24:59.593 rmmod nvme_keyring 00:24:59.593 22:23:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:59.593 22:23:54 -- nvmf/common.sh@123 -- # set -e 00:24:59.593 22:23:54 -- nvmf/common.sh@124 -- # return 0 00:24:59.593 22:23:54 -- nvmf/common.sh@477 -- # '[' -n 3650261 ']' 00:24:59.593 22:23:54 -- nvmf/common.sh@478 -- # killprocess 3650261 00:24:59.593 22:23:54 -- common/autotest_common.sh@926 -- # '[' -z 3650261 ']' 00:24:59.593 22:23:54 -- common/autotest_common.sh@930 
-- # kill -0 3650261 00:24:59.593 22:23:54 -- common/autotest_common.sh@931 -- # uname 00:24:59.593 22:23:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:59.593 22:23:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3650261 00:24:59.593 22:23:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:59.593 22:23:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:59.593 22:23:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3650261' 00:24:59.593 killing process with pid 3650261 00:24:59.593 22:23:54 -- common/autotest_common.sh@945 -- # kill 3650261 00:24:59.593 22:23:54 -- common/autotest_common.sh@950 -- # wait 3650261 00:24:59.853 22:23:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:59.853 22:23:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:59.853 22:23:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:59.853 22:23:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.853 22:23:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:59.853 22:23:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.853 22:23:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.853 22:23:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.392 22:23:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:02.392 22:23:56 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:25:02.392 22:23:56 -- target/perf_adq.sh@52 -- # rmmod ice 00:25:03.332 22:23:58 -- target/perf_adq.sh@53 -- # modprobe ice 00:25:04.713 22:23:59 -- target/perf_adq.sh@54 -- # sleep 5 00:25:09.994 22:24:04 -- target/perf_adq.sh@87 -- # nvmftestinit 00:25:09.994 22:24:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:09.994 22:24:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.994 22:24:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:09.994 22:24:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:09.994 22:24:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:09.994 22:24:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.994 22:24:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.994 22:24:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.994 22:24:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:09.994 22:24:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:09.994 22:24:04 -- common/autotest_common.sh@10 -- # set +x 00:25:09.994 22:24:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:09.994 22:24:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:09.994 22:24:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:09.994 22:24:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:09.994 22:24:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:09.994 22:24:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:09.994 22:24:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:09.994 22:24:04 -- nvmf/common.sh@294 -- # net_devs=() 00:25:09.994 22:24:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:09.994 22:24:04 -- nvmf/common.sh@295 -- # e810=() 00:25:09.994 22:24:04 -- nvmf/common.sh@295 -- # local -ga e810 00:25:09.994 22:24:04 -- nvmf/common.sh@296 -- # x722=() 00:25:09.994 22:24:04 -- nvmf/common.sh@296 -- # local -ga x722 00:25:09.994 22:24:04 -- nvmf/common.sh@297 -- # mlx=() 00:25:09.994 
22:24:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:09.994 22:24:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:09.994 22:24:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:09.994 22:24:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:09.994 22:24:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:09.994 22:24:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:09.994 22:24:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:09.994 22:24:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:09.994 22:24:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:09.994 22:24:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:09.994 22:24:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:09.994 22:24:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:09.994 22:24:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:09.994 22:24:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:09.994 22:24:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:09.994 22:24:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:09.994 22:24:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:09.994 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:09.994 22:24:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:09.994 22:24:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:09.994 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:09.994 22:24:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:09.994 22:24:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:09.994 22:24:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.994 22:24:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:09.994 22:24:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.994 22:24:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:09.994 Found net devices under 0000:86:00.0: cvl_0_0 00:25:09.994 22:24:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.994 22:24:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:09.994 22:24:04 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.994 22:24:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:09.994 22:24:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.994 22:24:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:09.994 Found net devices under 0000:86:00.1: cvl_0_1 00:25:09.994 22:24:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.994 22:24:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:09.994 22:24:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:09.994 22:24:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:09.994 22:24:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:09.994 22:24:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:09.994 22:24:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:09.994 22:24:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:09.994 22:24:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:09.994 22:24:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:09.994 22:24:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:09.994 22:24:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:09.994 22:24:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:09.994 22:24:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:09.994 22:24:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:09.994 22:24:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:09.994 22:24:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:09.994 22:24:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:09.994 22:24:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:09.994 22:24:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:09.994 22:24:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:09.994 22:24:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:09.994 22:24:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:09.994 22:24:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:09.994 22:24:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:09.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:09.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:25:09.994 00:25:09.994 --- 10.0.0.2 ping statistics --- 00:25:09.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.994 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:25:09.994 22:24:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:10.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:25:10.255 00:25:10.255 --- 10.0.0.1 ping statistics --- 00:25:10.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.255 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:25:10.255 22:24:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.255 22:24:05 -- nvmf/common.sh@410 -- # return 0 00:25:10.255 22:24:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:10.255 22:24:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.255 22:24:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:10.255 22:24:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:10.255 22:24:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.255 22:24:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:10.255 22:24:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:10.255 22:24:05 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:25:10.255 22:24:05 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:25:10.255 22:24:05 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:25:10.255 22:24:05 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:25:10.255 net.core.busy_poll = 1 00:25:10.255 22:24:05 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:25:10.255 net.core.busy_read = 1 00:25:10.255 22:24:05 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:25:10.255 22:24:05 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:25:10.255 22:24:05 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:25:10.255 22:24:05 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:25:10.255 22:24:05 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:25:10.255 22:24:05 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:10.255 22:24:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:10.255 22:24:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:10.255 22:24:05 -- common/autotest_common.sh@10 -- # set +x 00:25:10.255 22:24:05 -- nvmf/common.sh@469 -- # nvmfpid=3654223 00:25:10.255 22:24:05 -- nvmf/common.sh@470 -- # waitforlisten 3654223 00:25:10.255 22:24:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:10.255 22:24:05 -- common/autotest_common.sh@819 -- # '[' -z 3654223 ']' 00:25:10.255 22:24:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.255 22:24:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:10.255 22:24:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
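adq_configure_driver, traced above, is the host-side half of the ADQ setup: enable hardware TC offload on the target port, turn off the packet-inspect optimization, enable busy polling, create two hardware traffic classes, and steer NVMe/TCP traffic (TCP dst port 4420 to 10.0.0.2) into the second class in hardware. Collected into one place with the values exactly as logged; in the trace the ethtool and tc commands run inside the target namespace via ip netns exec cvl_0_0_ns_spdk:
# ADQ host configuration for the target port cvl_0_0, condensed from the trace above.
ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 -> 2 queues at offset 0, TC1 -> 2 queues at offset 2, offloaded to the NIC.
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
# Steer NVMe/TCP (dst 10.0.0.2:4420) into TC1, hardware-only (skip_sw).
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# The SPDK target is then configured to match (see the rpc calls traced just below):
#   sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
#   nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1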
00:25:10.255 22:24:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:10.255 22:24:05 -- common/autotest_common.sh@10 -- # set +x 00:25:10.516 [2024-07-24 22:24:05.436856] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:10.516 [2024-07-24 22:24:05.436909] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.516 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.516 [2024-07-24 22:24:05.494764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:10.516 [2024-07-24 22:24:05.534975] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:10.516 [2024-07-24 22:24:05.535104] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.516 [2024-07-24 22:24:05.535114] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.516 [2024-07-24 22:24:05.535120] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.516 [2024-07-24 22:24:05.535170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.516 [2024-07-24 22:24:05.535271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.516 [2024-07-24 22:24:05.535357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:10.516 [2024-07-24 22:24:05.535358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.516 22:24:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:10.516 22:24:05 -- common/autotest_common.sh@852 -- # return 0 00:25:10.516 22:24:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:10.516 22:24:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:10.516 22:24:05 -- common/autotest_common.sh@10 -- # set +x 00:25:10.516 22:24:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.516 22:24:05 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:25:10.516 22:24:05 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:25:10.516 22:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.516 22:24:05 -- common/autotest_common.sh@10 -- # set +x 00:25:10.516 22:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.516 22:24:05 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:25:10.516 22:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.516 22:24:05 -- common/autotest_common.sh@10 -- # set +x 00:25:10.775 22:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.775 22:24:05 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:25:10.775 22:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.775 22:24:05 -- common/autotest_common.sh@10 -- # set +x 00:25:10.775 [2024-07-24 22:24:05.711760] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.775 22:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.775 22:24:05 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:10.775 22:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.775 22:24:05 -- 
common/autotest_common.sh@10 -- # set +x 00:25:10.775 Malloc1 00:25:10.775 22:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.775 22:24:05 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:10.775 22:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.775 22:24:05 -- common/autotest_common.sh@10 -- # set +x 00:25:10.775 22:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.775 22:24:05 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:10.775 22:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.775 22:24:05 -- common/autotest_common.sh@10 -- # set +x 00:25:10.775 22:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.775 22:24:05 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.775 22:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:10.775 22:24:05 -- common/autotest_common.sh@10 -- # set +x 00:25:10.775 [2024-07-24 22:24:05.755702] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.775 22:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:10.775 22:24:05 -- target/perf_adq.sh@94 -- # perfpid=3654322 00:25:10.775 22:24:05 -- target/perf_adq.sh@95 -- # sleep 2 00:25:10.775 22:24:05 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:10.775 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.679 22:24:07 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:25:12.679 22:24:07 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:25:12.679 22:24:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.679 22:24:07 -- target/perf_adq.sh@97 -- # wc -l 00:25:12.679 22:24:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.679 22:24:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.679 22:24:07 -- target/perf_adq.sh@97 -- # count=3 00:25:12.679 22:24:07 -- target/perf_adq.sh@98 -- # [[ 3 -lt 2 ]] 00:25:12.679 22:24:07 -- target/perf_adq.sh@103 -- # wait 3654322 00:25:20.867 Initializing NVMe Controllers 00:25:20.867 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:20.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:25:20.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:25:20.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:25:20.867 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:25:20.867 Initialization complete. Launching workers. 
00:25:20.867 ======================================================== 00:25:20.867 Latency(us) 00:25:20.867 Device Information : IOPS MiB/s Average min max 00:25:20.867 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6056.86 23.66 10578.64 2048.48 55530.11 00:25:20.867 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6118.96 23.90 10458.92 1768.85 55305.65 00:25:20.867 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6225.76 24.32 10294.29 1790.92 56041.07 00:25:20.867 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5991.86 23.41 10722.91 1741.53 54560.30 00:25:20.867 ======================================================== 00:25:20.867 Total : 24393.44 95.29 10511.47 1741.53 56041.07 00:25:20.867 00:25:20.867 22:24:15 -- target/perf_adq.sh@104 -- # nvmftestfini 00:25:20.867 22:24:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:20.867 22:24:15 -- nvmf/common.sh@116 -- # sync 00:25:20.867 22:24:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:20.867 22:24:15 -- nvmf/common.sh@119 -- # set +e 00:25:20.867 22:24:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:20.867 22:24:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:20.867 rmmod nvme_tcp 00:25:20.867 rmmod nvme_fabrics 00:25:21.126 rmmod nvme_keyring 00:25:21.126 22:24:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:21.126 22:24:16 -- nvmf/common.sh@123 -- # set -e 00:25:21.126 22:24:16 -- nvmf/common.sh@124 -- # return 0 00:25:21.126 22:24:16 -- nvmf/common.sh@477 -- # '[' -n 3654223 ']' 00:25:21.126 22:24:16 -- nvmf/common.sh@478 -- # killprocess 3654223 00:25:21.126 22:24:16 -- common/autotest_common.sh@926 -- # '[' -z 3654223 ']' 00:25:21.126 22:24:16 -- common/autotest_common.sh@930 -- # kill -0 3654223 00:25:21.126 22:24:16 -- common/autotest_common.sh@931 -- # uname 00:25:21.126 22:24:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:21.126 22:24:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3654223 00:25:21.126 22:24:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:21.126 22:24:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:21.126 22:24:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3654223' 00:25:21.126 killing process with pid 3654223 00:25:21.126 22:24:16 -- common/autotest_common.sh@945 -- # kill 3654223 00:25:21.126 22:24:16 -- common/autotest_common.sh@950 -- # wait 3654223 00:25:21.386 22:24:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:21.386 22:24:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:21.386 22:24:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:21.386 22:24:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:21.386 22:24:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:21.386 22:24:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.386 22:24:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.386 22:24:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.677 22:24:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:24.677 22:24:19 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:25:24.677 00:25:24.677 real 0m48.624s 00:25:24.677 user 2m42.545s 00:25:24.677 sys 0m10.059s 00:25:24.677 22:24:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.677 22:24:19 -- common/autotest_common.sh@10 -- # set +x 00:25:24.677 
************************************ 00:25:24.677 END TEST nvmf_perf_adq 00:25:24.677 ************************************ 00:25:24.677 22:24:19 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:24.677 22:24:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:24.677 22:24:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:24.677 22:24:19 -- common/autotest_common.sh@10 -- # set +x 00:25:24.677 ************************************ 00:25:24.677 START TEST nvmf_shutdown 00:25:24.677 ************************************ 00:25:24.677 22:24:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:25:24.677 * Looking for test storage... 00:25:24.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:24.677 22:24:19 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.677 22:24:19 -- nvmf/common.sh@7 -- # uname -s 00:25:24.677 22:24:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.677 22:24:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.677 22:24:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.677 22:24:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.677 22:24:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.677 22:24:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.677 22:24:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.677 22:24:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.677 22:24:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.677 22:24:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.677 22:24:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:24.677 22:24:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:24.677 22:24:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.677 22:24:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.677 22:24:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.677 22:24:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.677 22:24:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.677 22:24:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.677 22:24:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.677 22:24:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.677 22:24:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.677 22:24:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.677 22:24:19 -- paths/export.sh@5 -- # export PATH 00:25:24.677 22:24:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.677 22:24:19 -- nvmf/common.sh@46 -- # : 0 00:25:24.677 22:24:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:24.677 22:24:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:24.677 22:24:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:24.677 22:24:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.677 22:24:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.677 22:24:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:24.677 22:24:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:24.677 22:24:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:24.677 22:24:19 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:24.677 22:24:19 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:24.677 22:24:19 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:25:24.677 22:24:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:24.677 22:24:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:24.677 22:24:19 -- common/autotest_common.sh@10 -- # set +x 00:25:24.677 ************************************ 00:25:24.677 START TEST nvmf_shutdown_tc1 00:25:24.677 ************************************ 00:25:24.677 22:24:19 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:25:24.677 22:24:19 -- target/shutdown.sh@74 -- # starttarget 00:25:24.677 22:24:19 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:24.677 22:24:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:24.677 22:24:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.677 22:24:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:24.677 22:24:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:24.677 22:24:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:24.677 
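As a reading aid for the nvmftestinit trace that follows: with NET_TYPE=phy the script moves one of the two ice-bound ports (0000:86:00.0/.1) into a private network namespace so the target can listen on 10.0.0.2 while the initiator stays in the root namespace on 10.0.0.1, all on one machine. Condensed from the commands traced below, the plumbing is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # sanity check before the test proper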
22:24:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.677 22:24:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.677 22:24:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.677 22:24:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:24.677 22:24:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:24.677 22:24:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:24.677 22:24:19 -- common/autotest_common.sh@10 -- # set +x 00:25:29.955 22:24:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:29.955 22:24:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:29.955 22:24:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:29.955 22:24:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:29.955 22:24:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:29.955 22:24:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:29.955 22:24:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:29.955 22:24:24 -- nvmf/common.sh@294 -- # net_devs=() 00:25:29.955 22:24:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:29.955 22:24:24 -- nvmf/common.sh@295 -- # e810=() 00:25:29.955 22:24:24 -- nvmf/common.sh@295 -- # local -ga e810 00:25:29.955 22:24:24 -- nvmf/common.sh@296 -- # x722=() 00:25:29.955 22:24:24 -- nvmf/common.sh@296 -- # local -ga x722 00:25:29.955 22:24:24 -- nvmf/common.sh@297 -- # mlx=() 00:25:29.955 22:24:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:29.955 22:24:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.955 22:24:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.955 22:24:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.955 22:24:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.955 22:24:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.955 22:24:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.955 22:24:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.955 22:24:24 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.955 22:24:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.955 22:24:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.955 22:24:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.955 22:24:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:29.955 22:24:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:29.955 22:24:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:29.955 22:24:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:29.955 22:24:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:29.955 22:24:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:29.955 22:24:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:29.955 22:24:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:29.955 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:29.955 22:24:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:25:29.956 22:24:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:29.956 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:29.956 22:24:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:29.956 22:24:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:29.956 22:24:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.956 22:24:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:29.956 22:24:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.956 22:24:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:29.956 Found net devices under 0000:86:00.0: cvl_0_0 00:25:29.956 22:24:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.956 22:24:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:29.956 22:24:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.956 22:24:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:29.956 22:24:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.956 22:24:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:29.956 Found net devices under 0000:86:00.1: cvl_0_1 00:25:29.956 22:24:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.956 22:24:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:29.956 22:24:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:29.956 22:24:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:29.956 22:24:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.956 22:24:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.956 22:24:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.956 22:24:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:29.956 22:24:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.956 22:24:24 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.956 22:24:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:29.956 22:24:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.956 22:24:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.956 22:24:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:29.956 22:24:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:29.956 22:24:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.956 22:24:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.956 22:24:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.956 22:24:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.956 22:24:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:29.956 22:24:24 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.956 22:24:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.956 22:24:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.956 22:24:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:29.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:25:29.956 00:25:29.956 --- 10.0.0.2 ping statistics --- 00:25:29.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.956 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:25:29.956 22:24:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:29.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.363 ms 00:25:29.956 00:25:29.956 --- 10.0.0.1 ping statistics --- 00:25:29.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.956 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:25:29.956 22:24:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.956 22:24:24 -- nvmf/common.sh@410 -- # return 0 00:25:29.956 22:24:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:29.956 22:24:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.956 22:24:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:29.956 22:24:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.956 22:24:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:29.956 22:24:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:29.956 22:24:25 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:29.956 22:24:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:29.956 22:24:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:29.956 22:24:25 -- common/autotest_common.sh@10 -- # set +x 00:25:29.956 22:24:25 -- nvmf/common.sh@469 -- # nvmfpid=3660124 00:25:29.956 22:24:25 -- nvmf/common.sh@470 -- # waitforlisten 3660124 00:25:29.956 22:24:25 -- common/autotest_common.sh@819 -- # '[' -z 3660124 ']' 00:25:29.956 22:24:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.956 22:24:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:29.956 22:24:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.956 22:24:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:29.956 22:24:25 -- common/autotest_common.sh@10 -- # set +x 00:25:29.956 22:24:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:29.956 [2024-07-24 22:24:25.057471] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:25:29.956 [2024-07-24 22:24:25.057512] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.956 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.215 [2024-07-24 22:24:25.114782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:30.215 [2024-07-24 22:24:25.154346] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:30.215 [2024-07-24 22:24:25.154458] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.215 [2024-07-24 22:24:25.154465] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.215 [2024-07-24 22:24:25.154472] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:30.215 [2024-07-24 22:24:25.154575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.215 [2024-07-24 22:24:25.154668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:30.215 [2024-07-24 22:24:25.154778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.215 [2024-07-24 22:24:25.154779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:30.782 22:24:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:30.782 22:24:25 -- common/autotest_common.sh@852 -- # return 0 00:25:30.782 22:24:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:30.783 22:24:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:30.783 22:24:25 -- common/autotest_common.sh@10 -- # set +x 00:25:30.783 22:24:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.783 22:24:25 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:30.783 22:24:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.783 22:24:25 -- common/autotest_common.sh@10 -- # set +x 00:25:30.783 [2024-07-24 22:24:25.893510] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.783 22:24:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.783 22:24:25 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:30.783 22:24:25 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:30.783 22:24:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:30.783 22:24:25 -- common/autotest_common.sh@10 -- # set +x 00:25:30.783 22:24:25 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:30.783 22:24:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.783 22:24:25 -- target/shutdown.sh@28 -- # cat 00:25:30.783 22:24:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:30.783 22:24:25 -- target/shutdown.sh@28 -- # cat 00:25:31.042 22:24:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:31.042 22:24:25 -- target/shutdown.sh@28 -- # cat 00:25:31.042 22:24:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:31.042 22:24:25 -- target/shutdown.sh@28 -- # cat 00:25:31.042 22:24:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:31.042 22:24:25 -- target/shutdown.sh@28 -- # cat 00:25:31.042 22:24:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:31.042 22:24:25 -- 
target/shutdown.sh@28 -- # cat 00:25:31.042 22:24:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:31.042 22:24:25 -- target/shutdown.sh@28 -- # cat 00:25:31.042 22:24:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:31.042 22:24:25 -- target/shutdown.sh@28 -- # cat 00:25:31.042 22:24:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:31.042 22:24:25 -- target/shutdown.sh@28 -- # cat 00:25:31.042 22:24:25 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:31.042 22:24:25 -- target/shutdown.sh@28 -- # cat 00:25:31.042 22:24:25 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:31.042 22:24:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.042 22:24:25 -- common/autotest_common.sh@10 -- # set +x 00:25:31.042 Malloc1 00:25:31.042 [2024-07-24 22:24:25.989648] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.042 Malloc2 00:25:31.042 Malloc3 00:25:31.042 Malloc4 00:25:31.042 Malloc5 00:25:31.301 Malloc6 00:25:31.301 Malloc7 00:25:31.301 Malloc8 00:25:31.301 Malloc9 00:25:31.301 Malloc10 00:25:31.301 22:24:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.301 22:24:26 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:31.301 22:24:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:31.301 22:24:26 -- common/autotest_common.sh@10 -- # set +x 00:25:31.301 22:24:26 -- target/shutdown.sh@78 -- # perfpid=3660401 00:25:31.301 22:24:26 -- target/shutdown.sh@79 -- # waitforlisten 3660401 /var/tmp/bdevperf.sock 00:25:31.301 22:24:26 -- common/autotest_common.sh@819 -- # '[' -z 3660401 ']' 00:25:31.301 22:24:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:31.301 22:24:26 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:25:31.301 22:24:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:31.301 22:24:26 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:31.301 22:24:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:31.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
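The wall of JSON below is just gen_nvmf_target_json emitting one bdev_nvme_attach_controller stanza per subsystem for the bdev_svc app launched above. What tc1 then does with it is the hard-kill sequence, roughly (pids are the ones from this run; '1 ... 10' stands for the ten subsystem indices):

  # tc1: prove the target survives an initiator-side app being SIGKILLed
  bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 ... 10)   # pid 3660401
  kill -9 3660401                     # shutdown.sh@83: hard-kill the bdev_svc app
  kill -0 3660124                     # shutdown.sh@88: the nvmf_tgt must still be alive
  bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 ... 10) \
      -q 64 -o 65536 -w verify -t 1   # shutdown.sh@91: real I/O pass against all ten subsystems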
00:25:31.301 22:24:26 -- nvmf/common.sh@520 -- # config=() 00:25:31.301 22:24:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:31.301 22:24:26 -- nvmf/common.sh@520 -- # local subsystem config 00:25:31.301 22:24:26 -- common/autotest_common.sh@10 -- # set +x 00:25:31.301 22:24:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.301 22:24:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.301 { 00:25:31.301 "params": { 00:25:31.301 "name": "Nvme$subsystem", 00:25:31.301 "trtype": "$TEST_TRANSPORT", 00:25:31.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.301 "adrfam": "ipv4", 00:25:31.301 "trsvcid": "$NVMF_PORT", 00:25:31.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.301 "hdgst": ${hdgst:-false}, 00:25:31.301 "ddgst": ${ddgst:-false} 00:25:31.301 }, 00:25:31.301 "method": "bdev_nvme_attach_controller" 00:25:31.301 } 00:25:31.301 EOF 00:25:31.301 )") 00:25:31.301 22:24:26 -- nvmf/common.sh@542 -- # cat 00:25:31.301 22:24:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.301 22:24:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.301 { 00:25:31.301 "params": { 00:25:31.301 "name": "Nvme$subsystem", 00:25:31.301 "trtype": "$TEST_TRANSPORT", 00:25:31.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.301 "adrfam": "ipv4", 00:25:31.301 "trsvcid": "$NVMF_PORT", 00:25:31.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.301 "hdgst": ${hdgst:-false}, 00:25:31.301 "ddgst": ${ddgst:-false} 00:25:31.301 }, 00:25:31.301 "method": "bdev_nvme_attach_controller" 00:25:31.301 } 00:25:31.301 EOF 00:25:31.301 )") 00:25:31.301 22:24:26 -- nvmf/common.sh@542 -- # cat 00:25:31.301 22:24:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.301 22:24:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.301 { 00:25:31.301 "params": { 00:25:31.301 "name": "Nvme$subsystem", 00:25:31.301 "trtype": "$TEST_TRANSPORT", 00:25:31.301 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.301 "adrfam": "ipv4", 00:25:31.301 "trsvcid": "$NVMF_PORT", 00:25:31.301 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.301 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.301 "hdgst": ${hdgst:-false}, 00:25:31.301 "ddgst": ${ddgst:-false} 00:25:31.301 }, 00:25:31.301 "method": "bdev_nvme_attach_controller" 00:25:31.301 } 00:25:31.301 EOF 00:25:31.301 )") 00:25:31.301 22:24:26 -- nvmf/common.sh@542 -- # cat 00:25:31.561 22:24:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.561 22:24:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.561 { 00:25:31.561 "params": { 00:25:31.561 "name": "Nvme$subsystem", 00:25:31.561 "trtype": "$TEST_TRANSPORT", 00:25:31.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.561 "adrfam": "ipv4", 00:25:31.561 "trsvcid": "$NVMF_PORT", 00:25:31.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.561 "hdgst": ${hdgst:-false}, 00:25:31.561 "ddgst": ${ddgst:-false} 00:25:31.561 }, 00:25:31.561 "method": "bdev_nvme_attach_controller" 00:25:31.561 } 00:25:31.561 EOF 00:25:31.561 )") 00:25:31.561 22:24:26 -- nvmf/common.sh@542 -- # cat 00:25:31.561 22:24:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.561 22:24:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.561 { 00:25:31.561 "params": { 00:25:31.561 "name": "Nvme$subsystem", 00:25:31.561 "trtype": 
"$TEST_TRANSPORT", 00:25:31.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.561 "adrfam": "ipv4", 00:25:31.561 "trsvcid": "$NVMF_PORT", 00:25:31.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.561 "hdgst": ${hdgst:-false}, 00:25:31.561 "ddgst": ${ddgst:-false} 00:25:31.561 }, 00:25:31.561 "method": "bdev_nvme_attach_controller" 00:25:31.561 } 00:25:31.561 EOF 00:25:31.561 )") 00:25:31.561 22:24:26 -- nvmf/common.sh@542 -- # cat 00:25:31.561 22:24:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.561 22:24:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.561 { 00:25:31.561 "params": { 00:25:31.561 "name": "Nvme$subsystem", 00:25:31.561 "trtype": "$TEST_TRANSPORT", 00:25:31.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.561 "adrfam": "ipv4", 00:25:31.561 "trsvcid": "$NVMF_PORT", 00:25:31.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.561 "hdgst": ${hdgst:-false}, 00:25:31.561 "ddgst": ${ddgst:-false} 00:25:31.561 }, 00:25:31.561 "method": "bdev_nvme_attach_controller" 00:25:31.561 } 00:25:31.561 EOF 00:25:31.561 )") 00:25:31.561 22:24:26 -- nvmf/common.sh@542 -- # cat 00:25:31.561 [2024-07-24 22:24:26.455990] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:31.561 [2024-07-24 22:24:26.456036] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:31.561 22:24:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.561 22:24:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.561 { 00:25:31.561 "params": { 00:25:31.561 "name": "Nvme$subsystem", 00:25:31.561 "trtype": "$TEST_TRANSPORT", 00:25:31.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.561 "adrfam": "ipv4", 00:25:31.561 "trsvcid": "$NVMF_PORT", 00:25:31.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.561 "hdgst": ${hdgst:-false}, 00:25:31.561 "ddgst": ${ddgst:-false} 00:25:31.561 }, 00:25:31.561 "method": "bdev_nvme_attach_controller" 00:25:31.561 } 00:25:31.561 EOF 00:25:31.561 )") 00:25:31.561 22:24:26 -- nvmf/common.sh@542 -- # cat 00:25:31.561 22:24:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.561 22:24:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.561 { 00:25:31.561 "params": { 00:25:31.561 "name": "Nvme$subsystem", 00:25:31.561 "trtype": "$TEST_TRANSPORT", 00:25:31.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.561 "adrfam": "ipv4", 00:25:31.561 "trsvcid": "$NVMF_PORT", 00:25:31.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.561 "hdgst": ${hdgst:-false}, 00:25:31.561 "ddgst": ${ddgst:-false} 00:25:31.561 }, 00:25:31.561 "method": "bdev_nvme_attach_controller" 00:25:31.562 } 00:25:31.562 EOF 00:25:31.562 )") 00:25:31.562 22:24:26 -- nvmf/common.sh@542 -- # cat 00:25:31.562 22:24:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.562 22:24:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.562 { 00:25:31.562 "params": { 00:25:31.562 "name": "Nvme$subsystem", 00:25:31.562 "trtype": "$TEST_TRANSPORT", 00:25:31.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.562 "adrfam": "ipv4", 00:25:31.562 "trsvcid": "$NVMF_PORT", 
00:25:31.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.562 "hdgst": ${hdgst:-false}, 00:25:31.562 "ddgst": ${ddgst:-false} 00:25:31.562 }, 00:25:31.562 "method": "bdev_nvme_attach_controller" 00:25:31.562 } 00:25:31.562 EOF 00:25:31.562 )") 00:25:31.562 22:24:26 -- nvmf/common.sh@542 -- # cat 00:25:31.562 22:24:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:31.562 22:24:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:31.562 { 00:25:31.562 "params": { 00:25:31.562 "name": "Nvme$subsystem", 00:25:31.562 "trtype": "$TEST_TRANSPORT", 00:25:31.562 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:31.562 "adrfam": "ipv4", 00:25:31.562 "trsvcid": "$NVMF_PORT", 00:25:31.562 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:31.562 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:31.562 "hdgst": ${hdgst:-false}, 00:25:31.562 "ddgst": ${ddgst:-false} 00:25:31.562 }, 00:25:31.562 "method": "bdev_nvme_attach_controller" 00:25:31.562 } 00:25:31.562 EOF 00:25:31.562 )") 00:25:31.562 22:24:26 -- nvmf/common.sh@542 -- # cat 00:25:31.562 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.562 22:24:26 -- nvmf/common.sh@544 -- # jq . 00:25:31.562 22:24:26 -- nvmf/common.sh@545 -- # IFS=, 00:25:31.562 22:24:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:31.562 "params": { 00:25:31.562 "name": "Nvme1", 00:25:31.562 "trtype": "tcp", 00:25:31.562 "traddr": "10.0.0.2", 00:25:31.562 "adrfam": "ipv4", 00:25:31.562 "trsvcid": "4420", 00:25:31.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:31.562 "hdgst": false, 00:25:31.562 "ddgst": false 00:25:31.562 }, 00:25:31.562 "method": "bdev_nvme_attach_controller" 00:25:31.562 },{ 00:25:31.562 "params": { 00:25:31.562 "name": "Nvme2", 00:25:31.562 "trtype": "tcp", 00:25:31.562 "traddr": "10.0.0.2", 00:25:31.562 "adrfam": "ipv4", 00:25:31.562 "trsvcid": "4420", 00:25:31.562 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:31.562 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:31.562 "hdgst": false, 00:25:31.562 "ddgst": false 00:25:31.562 }, 00:25:31.562 "method": "bdev_nvme_attach_controller" 00:25:31.562 },{ 00:25:31.562 "params": { 00:25:31.562 "name": "Nvme3", 00:25:31.562 "trtype": "tcp", 00:25:31.562 "traddr": "10.0.0.2", 00:25:31.562 "adrfam": "ipv4", 00:25:31.562 "trsvcid": "4420", 00:25:31.562 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:31.562 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:31.562 "hdgst": false, 00:25:31.562 "ddgst": false 00:25:31.562 }, 00:25:31.562 "method": "bdev_nvme_attach_controller" 00:25:31.562 },{ 00:25:31.562 "params": { 00:25:31.562 "name": "Nvme4", 00:25:31.562 "trtype": "tcp", 00:25:31.562 "traddr": "10.0.0.2", 00:25:31.562 "adrfam": "ipv4", 00:25:31.562 "trsvcid": "4420", 00:25:31.562 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:31.562 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:31.562 "hdgst": false, 00:25:31.562 "ddgst": false 00:25:31.562 }, 00:25:31.562 "method": "bdev_nvme_attach_controller" 00:25:31.562 },{ 00:25:31.562 "params": { 00:25:31.562 "name": "Nvme5", 00:25:31.562 "trtype": "tcp", 00:25:31.562 "traddr": "10.0.0.2", 00:25:31.562 "adrfam": "ipv4", 00:25:31.562 "trsvcid": "4420", 00:25:31.562 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:31.562 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:31.562 "hdgst": false, 00:25:31.562 "ddgst": false 00:25:31.562 }, 00:25:31.562 "method": "bdev_nvme_attach_controller" 00:25:31.562 },{ 00:25:31.562 "params": { 
00:25:31.562 "name": "Nvme6", 00:25:31.562 "trtype": "tcp", 00:25:31.562 "traddr": "10.0.0.2", 00:25:31.562 "adrfam": "ipv4", 00:25:31.562 "trsvcid": "4420", 00:25:31.562 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:31.562 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:31.562 "hdgst": false, 00:25:31.562 "ddgst": false 00:25:31.562 }, 00:25:31.562 "method": "bdev_nvme_attach_controller" 00:25:31.562 },{ 00:25:31.562 "params": { 00:25:31.562 "name": "Nvme7", 00:25:31.562 "trtype": "tcp", 00:25:31.562 "traddr": "10.0.0.2", 00:25:31.562 "adrfam": "ipv4", 00:25:31.562 "trsvcid": "4420", 00:25:31.562 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:31.562 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:31.562 "hdgst": false, 00:25:31.562 "ddgst": false 00:25:31.562 }, 00:25:31.562 "method": "bdev_nvme_attach_controller" 00:25:31.562 },{ 00:25:31.562 "params": { 00:25:31.562 "name": "Nvme8", 00:25:31.562 "trtype": "tcp", 00:25:31.562 "traddr": "10.0.0.2", 00:25:31.562 "adrfam": "ipv4", 00:25:31.562 "trsvcid": "4420", 00:25:31.562 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:31.562 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:31.562 "hdgst": false, 00:25:31.562 "ddgst": false 00:25:31.562 }, 00:25:31.562 "method": "bdev_nvme_attach_controller" 00:25:31.562 },{ 00:25:31.562 "params": { 00:25:31.562 "name": "Nvme9", 00:25:31.562 "trtype": "tcp", 00:25:31.562 "traddr": "10.0.0.2", 00:25:31.562 "adrfam": "ipv4", 00:25:31.562 "trsvcid": "4420", 00:25:31.562 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:31.562 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:31.562 "hdgst": false, 00:25:31.562 "ddgst": false 00:25:31.562 }, 00:25:31.562 "method": "bdev_nvme_attach_controller" 00:25:31.562 },{ 00:25:31.562 "params": { 00:25:31.562 "name": "Nvme10", 00:25:31.562 "trtype": "tcp", 00:25:31.562 "traddr": "10.0.0.2", 00:25:31.562 "adrfam": "ipv4", 00:25:31.562 "trsvcid": "4420", 00:25:31.562 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:31.562 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:31.562 "hdgst": false, 00:25:31.562 "ddgst": false 00:25:31.562 }, 00:25:31.562 "method": "bdev_nvme_attach_controller" 00:25:31.562 }' 00:25:31.562 [2024-07-24 22:24:26.512342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.562 [2024-07-24 22:24:26.550318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.468 22:24:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:33.468 22:24:28 -- common/autotest_common.sh@852 -- # return 0 00:25:33.468 22:24:28 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:33.468 22:24:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.468 22:24:28 -- common/autotest_common.sh@10 -- # set +x 00:25:33.727 22:24:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.727 22:24:28 -- target/shutdown.sh@83 -- # kill -9 3660401 00:25:33.727 22:24:28 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:25:33.727 22:24:28 -- target/shutdown.sh@87 -- # sleep 1 00:25:34.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3660401 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:25:34.666 22:24:29 -- target/shutdown.sh@88 -- # kill -0 3660124 00:25:34.666 22:24:29 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:25:34.666 22:24:29 -- 
target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:34.666 22:24:29 -- nvmf/common.sh@520 -- # config=() 00:25:34.666 22:24:29 -- nvmf/common.sh@520 -- # local subsystem config 00:25:34.666 22:24:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:34.666 { 00:25:34.666 "params": { 00:25:34.666 "name": "Nvme$subsystem", 00:25:34.666 "trtype": "$TEST_TRANSPORT", 00:25:34.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.666 "adrfam": "ipv4", 00:25:34.666 "trsvcid": "$NVMF_PORT", 00:25:34.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.666 "hdgst": ${hdgst:-false}, 00:25:34.666 "ddgst": ${ddgst:-false} 00:25:34.666 }, 00:25:34.666 "method": "bdev_nvme_attach_controller" 00:25:34.666 } 00:25:34.666 EOF 00:25:34.666 )") 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # cat 00:25:34.666 22:24:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:34.666 { 00:25:34.666 "params": { 00:25:34.666 "name": "Nvme$subsystem", 00:25:34.666 "trtype": "$TEST_TRANSPORT", 00:25:34.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.666 "adrfam": "ipv4", 00:25:34.666 "trsvcid": "$NVMF_PORT", 00:25:34.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.666 "hdgst": ${hdgst:-false}, 00:25:34.666 "ddgst": ${ddgst:-false} 00:25:34.666 }, 00:25:34.666 "method": "bdev_nvme_attach_controller" 00:25:34.666 } 00:25:34.666 EOF 00:25:34.666 )") 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # cat 00:25:34.666 22:24:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:34.666 { 00:25:34.666 "params": { 00:25:34.666 "name": "Nvme$subsystem", 00:25:34.666 "trtype": "$TEST_TRANSPORT", 00:25:34.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.666 "adrfam": "ipv4", 00:25:34.666 "trsvcid": "$NVMF_PORT", 00:25:34.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.666 "hdgst": ${hdgst:-false}, 00:25:34.666 "ddgst": ${ddgst:-false} 00:25:34.666 }, 00:25:34.666 "method": "bdev_nvme_attach_controller" 00:25:34.666 } 00:25:34.666 EOF 00:25:34.666 )") 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # cat 00:25:34.666 22:24:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:34.666 { 00:25:34.666 "params": { 00:25:34.666 "name": "Nvme$subsystem", 00:25:34.666 "trtype": "$TEST_TRANSPORT", 00:25:34.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.666 "adrfam": "ipv4", 00:25:34.666 "trsvcid": "$NVMF_PORT", 00:25:34.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.666 "hdgst": ${hdgst:-false}, 00:25:34.666 "ddgst": ${ddgst:-false} 00:25:34.666 }, 00:25:34.666 "method": "bdev_nvme_attach_controller" 00:25:34.666 } 00:25:34.666 EOF 00:25:34.666 )") 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # cat 00:25:34.666 22:24:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:34.666 { 00:25:34.666 "params": { 00:25:34.666 "name": "Nvme$subsystem", 00:25:34.666 "trtype": "$TEST_TRANSPORT", 00:25:34.666 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:34.666 "adrfam": "ipv4", 00:25:34.666 "trsvcid": "$NVMF_PORT", 00:25:34.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.666 "hdgst": ${hdgst:-false}, 00:25:34.666 "ddgst": ${ddgst:-false} 00:25:34.666 }, 00:25:34.666 "method": "bdev_nvme_attach_controller" 00:25:34.666 } 00:25:34.666 EOF 00:25:34.666 )") 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # cat 00:25:34.666 22:24:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:34.666 { 00:25:34.666 "params": { 00:25:34.666 "name": "Nvme$subsystem", 00:25:34.666 "trtype": "$TEST_TRANSPORT", 00:25:34.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.666 "adrfam": "ipv4", 00:25:34.666 "trsvcid": "$NVMF_PORT", 00:25:34.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.666 "hdgst": ${hdgst:-false}, 00:25:34.666 "ddgst": ${ddgst:-false} 00:25:34.666 }, 00:25:34.666 "method": "bdev_nvme_attach_controller" 00:25:34.666 } 00:25:34.666 EOF 00:25:34.666 )") 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # cat 00:25:34.666 22:24:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:34.666 { 00:25:34.666 "params": { 00:25:34.666 "name": "Nvme$subsystem", 00:25:34.666 "trtype": "$TEST_TRANSPORT", 00:25:34.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.666 "adrfam": "ipv4", 00:25:34.666 "trsvcid": "$NVMF_PORT", 00:25:34.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.666 "hdgst": ${hdgst:-false}, 00:25:34.666 "ddgst": ${ddgst:-false} 00:25:34.666 }, 00:25:34.666 "method": "bdev_nvme_attach_controller" 00:25:34.666 } 00:25:34.666 EOF 00:25:34.666 )") 00:25:34.666 [2024-07-24 22:24:29.660197] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:25:34.666 [2024-07-24 22:24:29.660247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660911 ] 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # cat 00:25:34.666 22:24:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:34.666 22:24:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:34.666 { 00:25:34.666 "params": { 00:25:34.666 "name": "Nvme$subsystem", 00:25:34.666 "trtype": "$TEST_TRANSPORT", 00:25:34.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.666 "adrfam": "ipv4", 00:25:34.666 "trsvcid": "$NVMF_PORT", 00:25:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.667 "hdgst": ${hdgst:-false}, 00:25:34.667 "ddgst": ${ddgst:-false} 00:25:34.667 }, 00:25:34.667 "method": "bdev_nvme_attach_controller" 00:25:34.667 } 00:25:34.667 EOF 00:25:34.667 )") 00:25:34.667 22:24:29 -- nvmf/common.sh@542 -- # cat 00:25:34.667 22:24:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:34.667 22:24:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:34.667 { 00:25:34.667 "params": { 00:25:34.667 "name": "Nvme$subsystem", 00:25:34.667 "trtype": "$TEST_TRANSPORT", 00:25:34.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.667 "adrfam": "ipv4", 00:25:34.667 "trsvcid": "$NVMF_PORT", 00:25:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.667 "hdgst": ${hdgst:-false}, 00:25:34.667 "ddgst": ${ddgst:-false} 00:25:34.667 }, 00:25:34.667 "method": "bdev_nvme_attach_controller" 00:25:34.667 } 00:25:34.667 EOF 00:25:34.667 )") 00:25:34.667 22:24:29 -- nvmf/common.sh@542 -- # cat 00:25:34.667 22:24:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:34.667 22:24:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:34.667 { 00:25:34.667 "params": { 00:25:34.667 "name": "Nvme$subsystem", 00:25:34.667 "trtype": "$TEST_TRANSPORT", 00:25:34.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.667 "adrfam": "ipv4", 00:25:34.667 "trsvcid": "$NVMF_PORT", 00:25:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.667 "hdgst": ${hdgst:-false}, 00:25:34.667 "ddgst": ${ddgst:-false} 00:25:34.667 }, 00:25:34.667 "method": "bdev_nvme_attach_controller" 00:25:34.667 } 00:25:34.667 EOF 00:25:34.667 )") 00:25:34.667 22:24:29 -- nvmf/common.sh@542 -- # cat 00:25:34.667 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.667 22:24:29 -- nvmf/common.sh@544 -- # jq . 
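Each pass through the for-loop above appends one controller stanza keyed by the loop index; the jq step just above and the IFS/printf that follow merely join those stanzas into the single --json payload handed to bdevperf. One stanza, as it appears in the printf output below:

  { "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false },
    "method": "bdev_nvme_attach_controller" }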
00:25:34.667 22:24:29 -- nvmf/common.sh@545 -- # IFS=, 00:25:34.667 22:24:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:34.667 "params": { 00:25:34.667 "name": "Nvme1", 00:25:34.667 "trtype": "tcp", 00:25:34.667 "traddr": "10.0.0.2", 00:25:34.667 "adrfam": "ipv4", 00:25:34.667 "trsvcid": "4420", 00:25:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:34.667 "hdgst": false, 00:25:34.667 "ddgst": false 00:25:34.667 }, 00:25:34.667 "method": "bdev_nvme_attach_controller" 00:25:34.667 },{ 00:25:34.667 "params": { 00:25:34.667 "name": "Nvme2", 00:25:34.667 "trtype": "tcp", 00:25:34.667 "traddr": "10.0.0.2", 00:25:34.667 "adrfam": "ipv4", 00:25:34.667 "trsvcid": "4420", 00:25:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:34.667 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:34.667 "hdgst": false, 00:25:34.667 "ddgst": false 00:25:34.667 }, 00:25:34.667 "method": "bdev_nvme_attach_controller" 00:25:34.667 },{ 00:25:34.667 "params": { 00:25:34.667 "name": "Nvme3", 00:25:34.667 "trtype": "tcp", 00:25:34.667 "traddr": "10.0.0.2", 00:25:34.667 "adrfam": "ipv4", 00:25:34.667 "trsvcid": "4420", 00:25:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:34.667 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:34.667 "hdgst": false, 00:25:34.667 "ddgst": false 00:25:34.667 }, 00:25:34.667 "method": "bdev_nvme_attach_controller" 00:25:34.667 },{ 00:25:34.667 "params": { 00:25:34.667 "name": "Nvme4", 00:25:34.667 "trtype": "tcp", 00:25:34.667 "traddr": "10.0.0.2", 00:25:34.667 "adrfam": "ipv4", 00:25:34.667 "trsvcid": "4420", 00:25:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:34.667 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:34.667 "hdgst": false, 00:25:34.667 "ddgst": false 00:25:34.667 }, 00:25:34.667 "method": "bdev_nvme_attach_controller" 00:25:34.667 },{ 00:25:34.667 "params": { 00:25:34.667 "name": "Nvme5", 00:25:34.667 "trtype": "tcp", 00:25:34.667 "traddr": "10.0.0.2", 00:25:34.667 "adrfam": "ipv4", 00:25:34.667 "trsvcid": "4420", 00:25:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:34.667 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:34.667 "hdgst": false, 00:25:34.667 "ddgst": false 00:25:34.667 }, 00:25:34.667 "method": "bdev_nvme_attach_controller" 00:25:34.667 },{ 00:25:34.667 "params": { 00:25:34.667 "name": "Nvme6", 00:25:34.667 "trtype": "tcp", 00:25:34.667 "traddr": "10.0.0.2", 00:25:34.667 "adrfam": "ipv4", 00:25:34.667 "trsvcid": "4420", 00:25:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:34.667 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:34.667 "hdgst": false, 00:25:34.667 "ddgst": false 00:25:34.667 }, 00:25:34.667 "method": "bdev_nvme_attach_controller" 00:25:34.667 },{ 00:25:34.667 "params": { 00:25:34.667 "name": "Nvme7", 00:25:34.667 "trtype": "tcp", 00:25:34.667 "traddr": "10.0.0.2", 00:25:34.667 "adrfam": "ipv4", 00:25:34.667 "trsvcid": "4420", 00:25:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:34.667 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:34.667 "hdgst": false, 00:25:34.667 "ddgst": false 00:25:34.667 }, 00:25:34.667 "method": "bdev_nvme_attach_controller" 00:25:34.667 },{ 00:25:34.667 "params": { 00:25:34.667 "name": "Nvme8", 00:25:34.667 "trtype": "tcp", 00:25:34.667 "traddr": "10.0.0.2", 00:25:34.667 "adrfam": "ipv4", 00:25:34.667 "trsvcid": "4420", 00:25:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:34.667 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:34.667 "hdgst": false, 00:25:34.667 "ddgst": false 00:25:34.667 }, 00:25:34.667 "method": 
"bdev_nvme_attach_controller" 00:25:34.667 },{ 00:25:34.667 "params": { 00:25:34.667 "name": "Nvme9", 00:25:34.667 "trtype": "tcp", 00:25:34.667 "traddr": "10.0.0.2", 00:25:34.667 "adrfam": "ipv4", 00:25:34.667 "trsvcid": "4420", 00:25:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:34.667 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:34.667 "hdgst": false, 00:25:34.667 "ddgst": false 00:25:34.667 }, 00:25:34.667 "method": "bdev_nvme_attach_controller" 00:25:34.667 },{ 00:25:34.667 "params": { 00:25:34.667 "name": "Nvme10", 00:25:34.667 "trtype": "tcp", 00:25:34.667 "traddr": "10.0.0.2", 00:25:34.667 "adrfam": "ipv4", 00:25:34.667 "trsvcid": "4420", 00:25:34.667 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:34.667 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:34.667 "hdgst": false, 00:25:34.667 "ddgst": false 00:25:34.667 }, 00:25:34.667 "method": "bdev_nvme_attach_controller" 00:25:34.667 }' 00:25:34.667 [2024-07-24 22:24:29.717204] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.667 [2024-07-24 22:24:29.755089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.575 Running I/O for 1 seconds... 00:25:37.515 00:25:37.515 Latency(us) 00:25:37.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.515 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:37.515 Verification LBA range: start 0x0 length 0x400 00:25:37.515 Nvme1n1 : 1.08 403.41 25.21 0.00 0.00 156265.74 14588.88 168683.97 00:25:37.515 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:37.515 Verification LBA range: start 0x0 length 0x400 00:25:37.515 Nvme2n1 : 1.07 498.01 31.13 0.00 0.00 125460.20 9630.94 128564.54 00:25:37.515 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:37.515 Verification LBA range: start 0x0 length 0x400 00:25:37.515 Nvme3n1 : 1.07 487.51 30.47 0.00 0.00 127887.33 13677.08 114431.55 00:25:37.515 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:37.515 Verification LBA range: start 0x0 length 0x400 00:25:37.515 Nvme4n1 : 1.07 488.92 30.56 0.00 0.00 126458.93 16868.40 97107.26 00:25:37.515 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:37.515 Verification LBA range: start 0x0 length 0x400 00:25:37.515 Nvme5n1 : 1.09 399.38 24.96 0.00 0.00 154533.55 6753.06 140418.00 00:25:37.515 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:37.515 Verification LBA range: start 0x0 length 0x400 00:25:37.515 Nvme6n1 : 1.07 454.32 28.40 0.00 0.00 133936.82 21313.45 112607.94 00:25:37.515 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:37.515 Verification LBA range: start 0x0 length 0x400 00:25:37.515 Nvme7n1 : 1.09 397.91 24.87 0.00 0.00 153361.92 9232.03 134035.37 00:25:37.515 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:37.515 Verification LBA range: start 0x0 length 0x400 00:25:37.515 Nvme8n1 : 1.08 486.18 30.39 0.00 0.00 124398.37 11283.59 109872.53 00:25:37.515 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:37.515 Verification LBA range: start 0x0 length 0x400 00:25:37.515 Nvme9n1 : 1.09 500.33 31.27 0.00 0.00 119690.31 4986.43 104401.70 00:25:37.515 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:37.515 Verification LBA range: start 0x0 length 0x400 00:25:37.515 Nvme10n1 : 1.09 534.99 33.44 0.00 0.00 111817.81 10314.80 97563.16 
00:25:37.515 =================================================================================================================== 00:25:37.515 Total : 4650.97 290.69 0.00 0.00 131917.68 4986.43 168683.97 00:25:37.515 22:24:32 -- target/shutdown.sh@93 -- # stoptarget 00:25:37.515 22:24:32 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:37.515 22:24:32 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:37.515 22:24:32 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:37.515 22:24:32 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:37.515 22:24:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:37.515 22:24:32 -- nvmf/common.sh@116 -- # sync 00:25:37.515 22:24:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:37.515 22:24:32 -- nvmf/common.sh@119 -- # set +e 00:25:37.515 22:24:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:37.515 22:24:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:37.515 rmmod nvme_tcp 00:25:37.515 rmmod nvme_fabrics 00:25:37.515 rmmod nvme_keyring 00:25:37.515 22:24:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:37.515 22:24:32 -- nvmf/common.sh@123 -- # set -e 00:25:37.515 22:24:32 -- nvmf/common.sh@124 -- # return 0 00:25:37.515 22:24:32 -- nvmf/common.sh@477 -- # '[' -n 3660124 ']' 00:25:37.515 22:24:32 -- nvmf/common.sh@478 -- # killprocess 3660124 00:25:37.515 22:24:32 -- common/autotest_common.sh@926 -- # '[' -z 3660124 ']' 00:25:37.515 22:24:32 -- common/autotest_common.sh@930 -- # kill -0 3660124 00:25:37.775 22:24:32 -- common/autotest_common.sh@931 -- # uname 00:25:37.775 22:24:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:37.775 22:24:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3660124 00:25:37.775 22:24:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:37.775 22:24:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:37.775 22:24:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3660124' 00:25:37.775 killing process with pid 3660124 00:25:37.775 22:24:32 -- common/autotest_common.sh@945 -- # kill 3660124 00:25:37.775 22:24:32 -- common/autotest_common.sh@950 -- # wait 3660124 00:25:38.035 22:24:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:38.035 22:24:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:38.035 22:24:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:38.035 22:24:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:38.035 22:24:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:38.035 22:24:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.035 22:24:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:38.035 22:24:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.570 22:24:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:40.570 00:25:40.570 real 0m15.635s 00:25:40.570 user 0m37.867s 00:25:40.570 sys 0m5.511s 00:25:40.570 22:24:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:40.570 22:24:35 -- common/autotest_common.sh@10 -- # set +x 00:25:40.570 ************************************ 00:25:40.570 END TEST nvmf_shutdown_tc1 00:25:40.570 ************************************ 00:25:40.570 22:24:35 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:40.570 22:24:35 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:40.570 22:24:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:40.570 22:24:35 -- common/autotest_common.sh@10 -- # set +x 00:25:40.570 ************************************ 00:25:40.570 START TEST nvmf_shutdown_tc2 00:25:40.570 ************************************ 00:25:40.570 22:24:35 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:25:40.570 22:24:35 -- target/shutdown.sh@98 -- # starttarget 00:25:40.570 22:24:35 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:40.570 22:24:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:40.570 22:24:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.570 22:24:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:40.570 22:24:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:40.570 22:24:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:40.570 22:24:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.570 22:24:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:40.570 22:24:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.570 22:24:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:40.570 22:24:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:40.570 22:24:35 -- common/autotest_common.sh@10 -- # set +x 00:25:40.570 22:24:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:40.570 22:24:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:40.570 22:24:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:40.570 22:24:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:40.570 22:24:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:40.570 22:24:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:40.570 22:24:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:40.570 22:24:35 -- nvmf/common.sh@294 -- # net_devs=() 00:25:40.570 22:24:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:40.570 22:24:35 -- nvmf/common.sh@295 -- # e810=() 00:25:40.570 22:24:35 -- nvmf/common.sh@295 -- # local -ga e810 00:25:40.570 22:24:35 -- nvmf/common.sh@296 -- # x722=() 00:25:40.570 22:24:35 -- nvmf/common.sh@296 -- # local -ga x722 00:25:40.570 22:24:35 -- nvmf/common.sh@297 -- # mlx=() 00:25:40.570 22:24:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:40.570 22:24:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.570 22:24:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.570 22:24:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.570 22:24:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.570 22:24:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.570 22:24:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.570 22:24:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.570 22:24:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.570 22:24:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.570 22:24:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.570 22:24:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.570 22:24:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:40.570 22:24:35 -- nvmf/common.sh@320 -- # [[ tcp == 
rdma ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:40.570 22:24:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:40.570 22:24:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:40.570 22:24:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:40.570 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:40.570 22:24:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:40.570 22:24:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:40.570 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:40.570 22:24:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:40.570 22:24:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:40.570 22:24:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:40.570 22:24:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.570 22:24:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:40.571 22:24:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.571 22:24:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:40.571 Found net devices under 0000:86:00.0: cvl_0_0 00:25:40.571 22:24:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.571 22:24:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:40.571 22:24:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.571 22:24:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:40.571 22:24:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.571 22:24:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:40.571 Found net devices under 0000:86:00.1: cvl_0_1 00:25:40.571 22:24:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.571 22:24:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:40.571 22:24:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:40.571 22:24:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:40.571 22:24:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:40.571 22:24:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:40.571 22:24:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.571 22:24:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.571 22:24:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.571 22:24:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:40.571 22:24:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.571 22:24:35 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.571 22:24:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:40.571 22:24:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.571 22:24:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.571 22:24:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:40.571 22:24:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:40.571 22:24:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.571 22:24:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.571 22:24:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.571 22:24:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.571 22:24:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:40.571 22:24:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.571 22:24:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.571 22:24:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.571 22:24:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:40.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:25:40.571 00:25:40.571 --- 10.0.0.2 ping statistics --- 00:25:40.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.571 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:25:40.571 22:24:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:40.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.378 ms 00:25:40.571 00:25:40.571 --- 10.0.0.1 ping statistics --- 00:25:40.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.571 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:25:40.571 22:24:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.571 22:24:35 -- nvmf/common.sh@410 -- # return 0 00:25:40.571 22:24:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:40.571 22:24:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.571 22:24:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:40.571 22:24:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:40.571 22:24:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.571 22:24:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:40.571 22:24:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:40.571 22:24:35 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:40.571 22:24:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:40.571 22:24:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:40.571 22:24:35 -- common/autotest_common.sh@10 -- # set +x 00:25:40.571 22:24:35 -- nvmf/common.sh@469 -- # nvmfpid=3661945 00:25:40.571 22:24:35 -- nvmf/common.sh@470 -- # waitforlisten 3661945 00:25:40.571 22:24:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:40.571 22:24:35 -- common/autotest_common.sh@819 -- # '[' -z 3661945 ']' 00:25:40.571 22:24:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.571 22:24:35 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:25:40.571 22:24:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.571 22:24:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:40.571 22:24:35 -- common/autotest_common.sh@10 -- # set +x 00:25:40.571 [2024-07-24 22:24:35.503916] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:40.571 [2024-07-24 22:24:35.503956] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.571 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.571 [2024-07-24 22:24:35.560291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:40.571 [2024-07-24 22:24:35.599774] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:40.571 [2024-07-24 22:24:35.599886] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.571 [2024-07-24 22:24:35.599894] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.571 [2024-07-24 22:24:35.599901] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.571 [2024-07-24 22:24:35.600002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:40.571 [2024-07-24 22:24:35.600086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:40.571 [2024-07-24 22:24:35.600192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.571 [2024-07-24 22:24:35.600193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:41.542 22:24:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:41.542 22:24:36 -- common/autotest_common.sh@852 -- # return 0 00:25:41.542 22:24:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:41.542 22:24:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:41.542 22:24:36 -- common/autotest_common.sh@10 -- # set +x 00:25:41.542 22:24:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.542 22:24:36 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:41.542 22:24:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.542 22:24:36 -- common/autotest_common.sh@10 -- # set +x 00:25:41.542 [2024-07-24 22:24:36.349476] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.542 22:24:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.542 22:24:36 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:41.542 22:24:36 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:41.542 22:24:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:41.542 22:24:36 -- common/autotest_common.sh@10 -- # set +x 00:25:41.542 22:24:36 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:41.542 22:24:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:41.542 22:24:36 -- target/shutdown.sh@28 -- # cat 00:25:41.542 22:24:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:41.542 22:24:36 -- target/shutdown.sh@28 -- # cat 
00:25:41.542 22:24:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:41.542 22:24:36 -- target/shutdown.sh@28 -- # cat 00:25:41.542 22:24:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:41.542 22:24:36 -- target/shutdown.sh@28 -- # cat 00:25:41.542 22:24:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:41.542 22:24:36 -- target/shutdown.sh@28 -- # cat 00:25:41.542 22:24:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:41.542 22:24:36 -- target/shutdown.sh@28 -- # cat 00:25:41.542 22:24:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:41.542 22:24:36 -- target/shutdown.sh@28 -- # cat 00:25:41.542 22:24:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:41.542 22:24:36 -- target/shutdown.sh@28 -- # cat 00:25:41.542 22:24:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:41.542 22:24:36 -- target/shutdown.sh@28 -- # cat 00:25:41.542 22:24:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:41.542 22:24:36 -- target/shutdown.sh@28 -- # cat 00:25:41.542 22:24:36 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:41.542 22:24:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.542 22:24:36 -- common/autotest_common.sh@10 -- # set +x 00:25:41.542 Malloc1 00:25:41.542 [2024-07-24 22:24:36.445557] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:41.542 Malloc2 00:25:41.542 Malloc3 00:25:41.542 Malloc4 00:25:41.542 Malloc5 00:25:41.542 Malloc6 00:25:41.542 Malloc7 00:25:41.802 Malloc8 00:25:41.802 Malloc9 00:25:41.802 Malloc10 00:25:41.802 22:24:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.802 22:24:36 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:41.802 22:24:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:41.802 22:24:36 -- common/autotest_common.sh@10 -- # set +x 00:25:41.802 22:24:36 -- target/shutdown.sh@102 -- # perfpid=3662233 00:25:41.802 22:24:36 -- target/shutdown.sh@103 -- # waitforlisten 3662233 /var/tmp/bdevperf.sock 00:25:41.802 22:24:36 -- common/autotest_common.sh@819 -- # '[' -z 3662233 ']' 00:25:41.802 22:24:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:41.802 22:24:36 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:41.802 22:24:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:41.802 22:24:36 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:41.802 22:24:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:41.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
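Note: the rpcs.txt batch assembled by the cat loop above is not echoed in the trace, only the resulting Malloc1..Malloc10 bdevs and the TCP listener notice on 10.0.0.2:4420 are. As a hedged sketch (not the script's literal batch), the equivalent manual setup for one of the ten subsystems against the nvmf_tgt RPC socket /var/tmp/spdk.sock would look roughly like this; sizes and the serial number are illustrative values, the NQN/bdev naming follows the pattern visible in the bdevperf config further below.
# sketch only: one malloc-backed subsystem, assuming rpc.py from the same SPDK tree
rpc.py -s /var/tmp/spdk.sock bdev_malloc_create -b Malloc1 64 512
rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420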
00:25:41.802 22:24:36 -- nvmf/common.sh@520 -- # config=() 00:25:41.802 22:24:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:41.802 22:24:36 -- nvmf/common.sh@520 -- # local subsystem config 00:25:41.802 22:24:36 -- common/autotest_common.sh@10 -- # set +x 00:25:41.802 22:24:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.802 { 00:25:41.802 "params": { 00:25:41.802 "name": "Nvme$subsystem", 00:25:41.802 "trtype": "$TEST_TRANSPORT", 00:25:41.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.802 "adrfam": "ipv4", 00:25:41.802 "trsvcid": "$NVMF_PORT", 00:25:41.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.802 "hdgst": ${hdgst:-false}, 00:25:41.802 "ddgst": ${ddgst:-false} 00:25:41.802 }, 00:25:41.802 "method": "bdev_nvme_attach_controller" 00:25:41.802 } 00:25:41.802 EOF 00:25:41.802 )") 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # cat 00:25:41.802 22:24:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.802 { 00:25:41.802 "params": { 00:25:41.802 "name": "Nvme$subsystem", 00:25:41.802 "trtype": "$TEST_TRANSPORT", 00:25:41.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.802 "adrfam": "ipv4", 00:25:41.802 "trsvcid": "$NVMF_PORT", 00:25:41.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.802 "hdgst": ${hdgst:-false}, 00:25:41.802 "ddgst": ${ddgst:-false} 00:25:41.802 }, 00:25:41.802 "method": "bdev_nvme_attach_controller" 00:25:41.802 } 00:25:41.802 EOF 00:25:41.802 )") 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # cat 00:25:41.802 22:24:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.802 { 00:25:41.802 "params": { 00:25:41.802 "name": "Nvme$subsystem", 00:25:41.802 "trtype": "$TEST_TRANSPORT", 00:25:41.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.802 "adrfam": "ipv4", 00:25:41.802 "trsvcid": "$NVMF_PORT", 00:25:41.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.802 "hdgst": ${hdgst:-false}, 00:25:41.802 "ddgst": ${ddgst:-false} 00:25:41.802 }, 00:25:41.802 "method": "bdev_nvme_attach_controller" 00:25:41.802 } 00:25:41.802 EOF 00:25:41.802 )") 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # cat 00:25:41.802 22:24:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.802 { 00:25:41.802 "params": { 00:25:41.802 "name": "Nvme$subsystem", 00:25:41.802 "trtype": "$TEST_TRANSPORT", 00:25:41.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.802 "adrfam": "ipv4", 00:25:41.802 "trsvcid": "$NVMF_PORT", 00:25:41.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.802 "hdgst": ${hdgst:-false}, 00:25:41.802 "ddgst": ${ddgst:-false} 00:25:41.802 }, 00:25:41.802 "method": "bdev_nvme_attach_controller" 00:25:41.802 } 00:25:41.802 EOF 00:25:41.802 )") 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # cat 00:25:41.802 22:24:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.802 { 00:25:41.802 "params": { 00:25:41.802 "name": "Nvme$subsystem", 00:25:41.802 "trtype": 
"$TEST_TRANSPORT", 00:25:41.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.802 "adrfam": "ipv4", 00:25:41.802 "trsvcid": "$NVMF_PORT", 00:25:41.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.802 "hdgst": ${hdgst:-false}, 00:25:41.802 "ddgst": ${ddgst:-false} 00:25:41.802 }, 00:25:41.802 "method": "bdev_nvme_attach_controller" 00:25:41.802 } 00:25:41.802 EOF 00:25:41.802 )") 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # cat 00:25:41.802 22:24:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.802 { 00:25:41.802 "params": { 00:25:41.802 "name": "Nvme$subsystem", 00:25:41.802 "trtype": "$TEST_TRANSPORT", 00:25:41.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.802 "adrfam": "ipv4", 00:25:41.802 "trsvcid": "$NVMF_PORT", 00:25:41.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.802 "hdgst": ${hdgst:-false}, 00:25:41.802 "ddgst": ${ddgst:-false} 00:25:41.802 }, 00:25:41.802 "method": "bdev_nvme_attach_controller" 00:25:41.802 } 00:25:41.802 EOF 00:25:41.802 )") 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # cat 00:25:41.802 [2024-07-24 22:24:36.912413] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:41.802 [2024-07-24 22:24:36.912459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662233 ] 00:25:41.802 22:24:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.802 { 00:25:41.802 "params": { 00:25:41.802 "name": "Nvme$subsystem", 00:25:41.802 "trtype": "$TEST_TRANSPORT", 00:25:41.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.802 "adrfam": "ipv4", 00:25:41.802 "trsvcid": "$NVMF_PORT", 00:25:41.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.802 "hdgst": ${hdgst:-false}, 00:25:41.802 "ddgst": ${ddgst:-false} 00:25:41.802 }, 00:25:41.802 "method": "bdev_nvme_attach_controller" 00:25:41.802 } 00:25:41.802 EOF 00:25:41.802 )") 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # cat 00:25:41.802 22:24:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.802 { 00:25:41.802 "params": { 00:25:41.802 "name": "Nvme$subsystem", 00:25:41.802 "trtype": "$TEST_TRANSPORT", 00:25:41.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.802 "adrfam": "ipv4", 00:25:41.802 "trsvcid": "$NVMF_PORT", 00:25:41.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.802 "hdgst": ${hdgst:-false}, 00:25:41.802 "ddgst": ${ddgst:-false} 00:25:41.802 }, 00:25:41.802 "method": "bdev_nvme_attach_controller" 00:25:41.802 } 00:25:41.802 EOF 00:25:41.802 )") 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # cat 00:25:41.802 22:24:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.802 { 00:25:41.802 "params": { 00:25:41.802 "name": "Nvme$subsystem", 00:25:41.802 "trtype": "$TEST_TRANSPORT", 00:25:41.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.802 "adrfam": "ipv4", 00:25:41.802 "trsvcid": 
"$NVMF_PORT", 00:25:41.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.802 "hdgst": ${hdgst:-false}, 00:25:41.802 "ddgst": ${ddgst:-false} 00:25:41.802 }, 00:25:41.802 "method": "bdev_nvme_attach_controller" 00:25:41.802 } 00:25:41.802 EOF 00:25:41.802 )") 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # cat 00:25:41.802 22:24:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:41.802 22:24:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:41.802 { 00:25:41.802 "params": { 00:25:41.802 "name": "Nvme$subsystem", 00:25:41.802 "trtype": "$TEST_TRANSPORT", 00:25:41.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:41.802 "adrfam": "ipv4", 00:25:41.802 "trsvcid": "$NVMF_PORT", 00:25:41.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:41.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:41.802 "hdgst": ${hdgst:-false}, 00:25:41.802 "ddgst": ${ddgst:-false} 00:25:41.802 }, 00:25:41.802 "method": "bdev_nvme_attach_controller" 00:25:41.802 } 00:25:41.802 EOF 00:25:41.802 )") 00:25:42.061 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.061 22:24:36 -- nvmf/common.sh@542 -- # cat 00:25:42.061 22:24:36 -- nvmf/common.sh@544 -- # jq . 00:25:42.061 22:24:36 -- nvmf/common.sh@545 -- # IFS=, 00:25:42.061 22:24:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:42.061 "params": { 00:25:42.061 "name": "Nvme1", 00:25:42.061 "trtype": "tcp", 00:25:42.062 "traddr": "10.0.0.2", 00:25:42.062 "adrfam": "ipv4", 00:25:42.062 "trsvcid": "4420", 00:25:42.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:42.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:42.062 "hdgst": false, 00:25:42.062 "ddgst": false 00:25:42.062 }, 00:25:42.062 "method": "bdev_nvme_attach_controller" 00:25:42.062 },{ 00:25:42.062 "params": { 00:25:42.062 "name": "Nvme2", 00:25:42.062 "trtype": "tcp", 00:25:42.062 "traddr": "10.0.0.2", 00:25:42.062 "adrfam": "ipv4", 00:25:42.062 "trsvcid": "4420", 00:25:42.062 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:42.062 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:42.062 "hdgst": false, 00:25:42.062 "ddgst": false 00:25:42.062 }, 00:25:42.062 "method": "bdev_nvme_attach_controller" 00:25:42.062 },{ 00:25:42.062 "params": { 00:25:42.062 "name": "Nvme3", 00:25:42.062 "trtype": "tcp", 00:25:42.062 "traddr": "10.0.0.2", 00:25:42.062 "adrfam": "ipv4", 00:25:42.062 "trsvcid": "4420", 00:25:42.062 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:42.062 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:42.062 "hdgst": false, 00:25:42.062 "ddgst": false 00:25:42.062 }, 00:25:42.062 "method": "bdev_nvme_attach_controller" 00:25:42.062 },{ 00:25:42.062 "params": { 00:25:42.062 "name": "Nvme4", 00:25:42.062 "trtype": "tcp", 00:25:42.062 "traddr": "10.0.0.2", 00:25:42.062 "adrfam": "ipv4", 00:25:42.062 "trsvcid": "4420", 00:25:42.062 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:42.062 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:42.062 "hdgst": false, 00:25:42.062 "ddgst": false 00:25:42.062 }, 00:25:42.062 "method": "bdev_nvme_attach_controller" 00:25:42.062 },{ 00:25:42.062 "params": { 00:25:42.062 "name": "Nvme5", 00:25:42.062 "trtype": "tcp", 00:25:42.062 "traddr": "10.0.0.2", 00:25:42.062 "adrfam": "ipv4", 00:25:42.062 "trsvcid": "4420", 00:25:42.062 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:42.062 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:42.062 "hdgst": false, 00:25:42.062 "ddgst": false 00:25:42.062 }, 00:25:42.062 "method": "bdev_nvme_attach_controller" 00:25:42.062 },{ 00:25:42.062 
"params": { 00:25:42.062 "name": "Nvme6", 00:25:42.062 "trtype": "tcp", 00:25:42.062 "traddr": "10.0.0.2", 00:25:42.062 "adrfam": "ipv4", 00:25:42.062 "trsvcid": "4420", 00:25:42.062 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:42.062 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:42.062 "hdgst": false, 00:25:42.062 "ddgst": false 00:25:42.062 }, 00:25:42.062 "method": "bdev_nvme_attach_controller" 00:25:42.062 },{ 00:25:42.062 "params": { 00:25:42.062 "name": "Nvme7", 00:25:42.062 "trtype": "tcp", 00:25:42.062 "traddr": "10.0.0.2", 00:25:42.062 "adrfam": "ipv4", 00:25:42.062 "trsvcid": "4420", 00:25:42.062 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:42.062 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:42.062 "hdgst": false, 00:25:42.062 "ddgst": false 00:25:42.062 }, 00:25:42.062 "method": "bdev_nvme_attach_controller" 00:25:42.062 },{ 00:25:42.062 "params": { 00:25:42.062 "name": "Nvme8", 00:25:42.062 "trtype": "tcp", 00:25:42.062 "traddr": "10.0.0.2", 00:25:42.062 "adrfam": "ipv4", 00:25:42.062 "trsvcid": "4420", 00:25:42.062 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:42.062 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:42.062 "hdgst": false, 00:25:42.062 "ddgst": false 00:25:42.062 }, 00:25:42.062 "method": "bdev_nvme_attach_controller" 00:25:42.062 },{ 00:25:42.062 "params": { 00:25:42.062 "name": "Nvme9", 00:25:42.062 "trtype": "tcp", 00:25:42.062 "traddr": "10.0.0.2", 00:25:42.062 "adrfam": "ipv4", 00:25:42.062 "trsvcid": "4420", 00:25:42.062 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:42.062 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:42.062 "hdgst": false, 00:25:42.062 "ddgst": false 00:25:42.062 }, 00:25:42.062 "method": "bdev_nvme_attach_controller" 00:25:42.062 },{ 00:25:42.062 "params": { 00:25:42.062 "name": "Nvme10", 00:25:42.062 "trtype": "tcp", 00:25:42.062 "traddr": "10.0.0.2", 00:25:42.062 "adrfam": "ipv4", 00:25:42.062 "trsvcid": "4420", 00:25:42.062 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:42.062 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:42.062 "hdgst": false, 00:25:42.062 "ddgst": false 00:25:42.062 }, 00:25:42.062 "method": "bdev_nvme_attach_controller" 00:25:42.062 }' 00:25:42.062 [2024-07-24 22:24:36.968200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.062 [2024-07-24 22:24:37.006025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.964 Running I/O for 10 seconds... 
00:25:43.964 22:24:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:43.964 22:24:39 -- common/autotest_common.sh@852 -- # return 0 00:25:43.964 22:24:39 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:43.964 22:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.964 22:24:39 -- common/autotest_common.sh@10 -- # set +x 00:25:43.964 22:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.964 22:24:39 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:43.964 22:24:39 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:43.964 22:24:39 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:43.964 22:24:39 -- target/shutdown.sh@57 -- # local ret=1 00:25:43.964 22:24:39 -- target/shutdown.sh@58 -- # local i 00:25:43.964 22:24:39 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:43.964 22:24:39 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:43.964 22:24:39 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:43.964 22:24:39 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:43.964 22:24:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.964 22:24:39 -- common/autotest_common.sh@10 -- # set +x 00:25:44.224 22:24:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.224 22:24:39 -- target/shutdown.sh@60 -- # read_io_count=167 00:25:44.224 22:24:39 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:25:44.224 22:24:39 -- target/shutdown.sh@64 -- # ret=0 00:25:44.224 22:24:39 -- target/shutdown.sh@65 -- # break 00:25:44.224 22:24:39 -- target/shutdown.sh@69 -- # return 0 00:25:44.224 22:24:39 -- target/shutdown.sh@109 -- # killprocess 3662233 00:25:44.224 22:24:39 -- common/autotest_common.sh@926 -- # '[' -z 3662233 ']' 00:25:44.224 22:24:39 -- common/autotest_common.sh@930 -- # kill -0 3662233 00:25:44.224 22:24:39 -- common/autotest_common.sh@931 -- # uname 00:25:44.224 22:24:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:44.224 22:24:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3662233 00:25:44.224 22:24:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:44.224 22:24:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:44.224 22:24:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3662233' 00:25:44.224 killing process with pid 3662233 00:25:44.224 22:24:39 -- common/autotest_common.sh@945 -- # kill 3662233 00:25:44.224 22:24:39 -- common/autotest_common.sh@950 -- # wait 3662233 00:25:44.224 Received shutdown signal, test time was about 0.569519 seconds 00:25:44.224 00:25:44.224 Latency(us) 00:25:44.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.224 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:44.224 Verification LBA range: start 0x0 length 0x400 00:25:44.224 Nvme1n1 : 0.51 456.37 28.52 0.00 0.00 135828.60 6952.51 144977.03 00:25:44.224 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:44.224 Verification LBA range: start 0x0 length 0x400 00:25:44.224 Nvme2n1 : 0.53 434.10 27.13 0.00 0.00 141029.37 15956.59 108048.92 00:25:44.224 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:44.224 Verification LBA range: start 0x0 length 0x400 00:25:44.224 Nvme3n1 : 0.57 341.04 21.32 0.00 0.00 164967.35 5328.36 168683.97 00:25:44.224 Job: Nvme4n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:25:44.224 Verification LBA range: start 0x0 length 0x400 00:25:44.224 Nvme4n1 : 0.52 436.89 27.31 0.00 0.00 136976.49 13107.20 119446.48 00:25:44.224 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:44.224 Verification LBA range: start 0x0 length 0x400 00:25:44.224 Nvme5n1 : 0.54 425.46 26.59 0.00 0.00 140319.77 7978.30 128564.54 00:25:44.224 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:44.224 Verification LBA range: start 0x0 length 0x400 00:25:44.224 Nvme6n1 : 0.52 367.07 22.94 0.00 0.00 157703.31 20287.67 138594.39 00:25:44.224 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:44.224 Verification LBA range: start 0x0 length 0x400 00:25:44.224 Nvme7n1 : 0.53 427.86 26.74 0.00 0.00 134883.69 12024.43 136770.78 00:25:44.224 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:44.224 Verification LBA range: start 0x0 length 0x400 00:25:44.224 Nvme8n1 : 0.53 429.48 26.84 0.00 0.00 131993.80 15158.76 118534.68 00:25:44.224 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:44.224 Verification LBA range: start 0x0 length 0x400 00:25:44.224 Nvme9n1 : 0.52 362.47 22.65 0.00 0.00 153365.04 18919.96 148624.25 00:25:44.224 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:44.224 Verification LBA range: start 0x0 length 0x400 00:25:44.224 Nvme10n1 : 0.56 408.68 25.54 0.00 0.00 125810.12 16640.45 130388.15 00:25:44.224 =================================================================================================================== 00:25:44.224 Total : 4089.44 255.59 0.00 0.00 141452.92 5328.36 168683.97 00:25:44.484 22:24:39 -- target/shutdown.sh@112 -- # sleep 1 00:25:45.421 22:24:40 -- target/shutdown.sh@113 -- # kill -0 3661945 00:25:45.421 22:24:40 -- target/shutdown.sh@115 -- # stoptarget 00:25:45.421 22:24:40 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:45.421 22:24:40 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:45.421 22:24:40 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:45.421 22:24:40 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:45.421 22:24:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:45.421 22:24:40 -- nvmf/common.sh@116 -- # sync 00:25:45.421 22:24:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:45.421 22:24:40 -- nvmf/common.sh@119 -- # set +e 00:25:45.421 22:24:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:45.421 22:24:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:45.421 rmmod nvme_tcp 00:25:45.421 rmmod nvme_fabrics 00:25:45.421 rmmod nvme_keyring 00:25:45.421 22:24:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:45.421 22:24:40 -- nvmf/common.sh@123 -- # set -e 00:25:45.421 22:24:40 -- nvmf/common.sh@124 -- # return 0 00:25:45.421 22:24:40 -- nvmf/common.sh@477 -- # '[' -n 3661945 ']' 00:25:45.421 22:24:40 -- nvmf/common.sh@478 -- # killprocess 3661945 00:25:45.421 22:24:40 -- common/autotest_common.sh@926 -- # '[' -z 3661945 ']' 00:25:45.421 22:24:40 -- common/autotest_common.sh@930 -- # kill -0 3661945 00:25:45.421 22:24:40 -- common/autotest_common.sh@931 -- # uname 00:25:45.421 22:24:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:45.421 22:24:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3661945 
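Note: the read_io_count=167 check seen above is the waitforio step in target/shutdown.sh: the test polls bdevperf's RPC socket until Nvme1n1 has completed at least 100 reads, then kills bdevperf mid-run so the shutdown path is exercised under load. A standalone approximation of that poll (only the bdev_get_iostat/jq pipeline and the 100-read threshold are taken from the trace; the retry count and pacing are assumptions):
# poll bdevperf on /var/tmp/bdevperf.sock until Nvme1n1 shows >= 100 completed reads
for _ in $(seq 10); do
    reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 1   # pacing not shown in the trace; assumed here
done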
00:25:45.681 22:24:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:45.681 22:24:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:45.681 22:24:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3661945' 00:25:45.681 killing process with pid 3661945 00:25:45.681 22:24:40 -- common/autotest_common.sh@945 -- # kill 3661945 00:25:45.681 22:24:40 -- common/autotest_common.sh@950 -- # wait 3661945 00:25:45.940 22:24:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:45.940 22:24:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:45.940 22:24:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:45.940 22:24:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:45.940 22:24:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:45.940 22:24:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.940 22:24:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.940 22:24:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.480 22:24:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:48.480 00:25:48.480 real 0m7.845s 00:25:48.480 user 0m23.996s 00:25:48.480 sys 0m1.296s 00:25:48.480 22:24:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:48.480 22:24:43 -- common/autotest_common.sh@10 -- # set +x 00:25:48.480 ************************************ 00:25:48.480 END TEST nvmf_shutdown_tc2 00:25:48.480 ************************************ 00:25:48.480 22:24:43 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:48.480 22:24:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:48.480 22:24:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:48.480 22:24:43 -- common/autotest_common.sh@10 -- # set +x 00:25:48.480 ************************************ 00:25:48.480 START TEST nvmf_shutdown_tc3 00:25:48.480 ************************************ 00:25:48.480 22:24:43 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:25:48.480 22:24:43 -- target/shutdown.sh@120 -- # starttarget 00:25:48.480 22:24:43 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:48.480 22:24:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:48.480 22:24:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.480 22:24:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:48.480 22:24:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:48.480 22:24:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:48.480 22:24:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.480 22:24:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.480 22:24:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.480 22:24:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:48.480 22:24:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:48.480 22:24:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:48.480 22:24:43 -- common/autotest_common.sh@10 -- # set +x 00:25:48.480 22:24:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:48.480 22:24:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:48.480 22:24:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:48.480 22:24:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:48.480 22:24:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:48.480 22:24:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:48.480 22:24:43 -- nvmf/common.sh@292 -- # local -A 
pci_drivers 00:25:48.480 22:24:43 -- nvmf/common.sh@294 -- # net_devs=() 00:25:48.480 22:24:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:48.480 22:24:43 -- nvmf/common.sh@295 -- # e810=() 00:25:48.480 22:24:43 -- nvmf/common.sh@295 -- # local -ga e810 00:25:48.480 22:24:43 -- nvmf/common.sh@296 -- # x722=() 00:25:48.481 22:24:43 -- nvmf/common.sh@296 -- # local -ga x722 00:25:48.481 22:24:43 -- nvmf/common.sh@297 -- # mlx=() 00:25:48.481 22:24:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:48.481 22:24:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.481 22:24:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.481 22:24:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.481 22:24:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.481 22:24:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.481 22:24:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.481 22:24:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.481 22:24:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.481 22:24:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.481 22:24:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.481 22:24:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.481 22:24:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:48.481 22:24:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:48.481 22:24:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:48.481 22:24:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:48.481 22:24:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:48.481 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:48.481 22:24:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:48.481 22:24:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:48.481 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:48.481 22:24:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:48.481 22:24:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:48.481 22:24:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.481 22:24:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:48.481 22:24:43 -- 
nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.481 22:24:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:48.481 Found net devices under 0000:86:00.0: cvl_0_0 00:25:48.481 22:24:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.481 22:24:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:48.481 22:24:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.481 22:24:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:48.481 22:24:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.481 22:24:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:48.481 Found net devices under 0000:86:00.1: cvl_0_1 00:25:48.481 22:24:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.481 22:24:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:48.481 22:24:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:48.481 22:24:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:48.481 22:24:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.481 22:24:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.481 22:24:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.481 22:24:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:48.481 22:24:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.481 22:24:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.481 22:24:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:48.481 22:24:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.481 22:24:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.481 22:24:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:48.481 22:24:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:48.481 22:24:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.481 22:24:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.481 22:24:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.481 22:24:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.481 22:24:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:48.481 22:24:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.481 22:24:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.481 22:24:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.481 22:24:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:48.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:25:48.481 00:25:48.481 --- 10.0.0.2 ping statistics --- 00:25:48.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.481 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:25:48.481 22:24:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:48.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:25:48.481 00:25:48.481 --- 10.0.0.1 ping statistics --- 00:25:48.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.481 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:25:48.481 22:24:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.481 22:24:43 -- nvmf/common.sh@410 -- # return 0 00:25:48.481 22:24:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:48.481 22:24:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.481 22:24:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:48.481 22:24:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.481 22:24:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:48.481 22:24:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:48.481 22:24:43 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:48.481 22:24:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:48.481 22:24:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:48.481 22:24:43 -- common/autotest_common.sh@10 -- # set +x 00:25:48.481 22:24:43 -- nvmf/common.sh@469 -- # nvmfpid=3663510 00:25:48.481 22:24:43 -- nvmf/common.sh@470 -- # waitforlisten 3663510 00:25:48.481 22:24:43 -- common/autotest_common.sh@819 -- # '[' -z 3663510 ']' 00:25:48.481 22:24:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.481 22:24:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:48.481 22:24:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.481 22:24:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:48.481 22:24:43 -- common/autotest_common.sh@10 -- # set +x 00:25:48.481 22:24:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:48.481 [2024-07-24 22:24:43.374388] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:25:48.481 [2024-07-24 22:24:43.374430] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.481 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.481 [2024-07-24 22:24:43.432134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:48.481 [2024-07-24 22:24:43.471876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:48.481 [2024-07-24 22:24:43.471983] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.481 [2024-07-24 22:24:43.471992] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.481 [2024-07-24 22:24:43.471998] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
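Note: the two app_setup_trace notices above give the debug path if this tc3 run needs post-mortem analysis: the nvmf tracepoints for instance 0 can be dumped live with the command the notice itself suggests, or the raw shared-memory trace file can be copied off for offline decoding. A minimal sketch (output paths are assumptions):
# dump the advertised nvmf tracepoints for shm instance 0
spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_tc3.txt
# or preserve the raw trace file for later analysis
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.tc3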
00:25:48.481 [2024-07-24 22:24:43.472099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:48.481 [2024-07-24 22:24:43.472184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:48.481 [2024-07-24 22:24:43.472290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:48.481 [2024-07-24 22:24:43.472291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:49.049 22:24:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:49.049 22:24:44 -- common/autotest_common.sh@852 -- # return 0 00:25:49.049 22:24:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:49.049 22:24:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:49.049 22:24:44 -- common/autotest_common.sh@10 -- # set +x 00:25:49.309 22:24:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:49.309 22:24:44 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:49.309 22:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:49.309 22:24:44 -- common/autotest_common.sh@10 -- # set +x 00:25:49.309 [2024-07-24 22:24:44.210475] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:49.309 22:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:49.309 22:24:44 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:49.309 22:24:44 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:49.309 22:24:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:49.309 22:24:44 -- common/autotest_common.sh@10 -- # set +x 00:25:49.309 22:24:44 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:49.309 22:24:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:49.309 22:24:44 -- target/shutdown.sh@28 -- # cat 00:25:49.309 22:24:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:49.309 22:24:44 -- target/shutdown.sh@28 -- # cat 00:25:49.309 22:24:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:49.309 22:24:44 -- target/shutdown.sh@28 -- # cat 00:25:49.309 22:24:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:49.309 22:24:44 -- target/shutdown.sh@28 -- # cat 00:25:49.309 22:24:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:49.309 22:24:44 -- target/shutdown.sh@28 -- # cat 00:25:49.309 22:24:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:49.309 22:24:44 -- target/shutdown.sh@28 -- # cat 00:25:49.309 22:24:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:49.309 22:24:44 -- target/shutdown.sh@28 -- # cat 00:25:49.309 22:24:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:49.309 22:24:44 -- target/shutdown.sh@28 -- # cat 00:25:49.309 22:24:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:49.309 22:24:44 -- target/shutdown.sh@28 -- # cat 00:25:49.309 22:24:44 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:49.309 22:24:44 -- target/shutdown.sh@28 -- # cat 00:25:49.309 22:24:44 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:49.309 22:24:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:49.309 22:24:44 -- common/autotest_common.sh@10 -- # set +x 00:25:49.309 Malloc1 00:25:49.309 [2024-07-24 22:24:44.306272] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:49.309 Malloc2 
00:25:49.309 Malloc3 00:25:49.309 Malloc4 00:25:49.568 Malloc5 00:25:49.569 Malloc6 00:25:49.569 Malloc7 00:25:49.569 Malloc8 00:25:49.569 Malloc9 00:25:49.569 Malloc10 00:25:49.828 22:24:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:49.828 22:24:44 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:49.828 22:24:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:49.828 22:24:44 -- common/autotest_common.sh@10 -- # set +x 00:25:49.828 22:24:44 -- target/shutdown.sh@124 -- # perfpid=3663792 00:25:49.828 22:24:44 -- target/shutdown.sh@125 -- # waitforlisten 3663792 /var/tmp/bdevperf.sock 00:25:49.828 22:24:44 -- common/autotest_common.sh@819 -- # '[' -z 3663792 ']' 00:25:49.828 22:24:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:49.828 22:24:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:49.828 22:24:44 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:49.828 22:24:44 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:49.828 22:24:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:49.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:49.828 22:24:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:49.828 22:24:44 -- nvmf/common.sh@520 -- # config=() 00:25:49.828 22:24:44 -- common/autotest_common.sh@10 -- # set +x 00:25:49.828 22:24:44 -- nvmf/common.sh@520 -- # local subsystem config 00:25:49.828 22:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.828 22:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.828 { 00:25:49.828 "params": { 00:25:49.828 "name": "Nvme$subsystem", 00:25:49.828 "trtype": "$TEST_TRANSPORT", 00:25:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.828 "adrfam": "ipv4", 00:25:49.828 "trsvcid": "$NVMF_PORT", 00:25:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.828 "hdgst": ${hdgst:-false}, 00:25:49.828 "ddgst": ${ddgst:-false} 00:25:49.828 }, 00:25:49.828 "method": "bdev_nvme_attach_controller" 00:25:49.828 } 00:25:49.828 EOF 00:25:49.828 )") 00:25:49.828 22:24:44 -- nvmf/common.sh@542 -- # cat 00:25:49.828 22:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.828 22:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.828 { 00:25:49.828 "params": { 00:25:49.828 "name": "Nvme$subsystem", 00:25:49.828 "trtype": "$TEST_TRANSPORT", 00:25:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.828 "adrfam": "ipv4", 00:25:49.828 "trsvcid": "$NVMF_PORT", 00:25:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.828 "hdgst": ${hdgst:-false}, 00:25:49.828 "ddgst": ${ddgst:-false} 00:25:49.828 }, 00:25:49.828 "method": "bdev_nvme_attach_controller" 00:25:49.828 } 00:25:49.828 EOF 00:25:49.828 )") 00:25:49.828 22:24:44 -- nvmf/common.sh@542 -- # cat 00:25:49.828 22:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.828 22:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.828 { 00:25:49.828 "params": { 00:25:49.828 "name": "Nvme$subsystem", 00:25:49.828 "trtype": "$TEST_TRANSPORT", 00:25:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:49.828 "adrfam": "ipv4", 00:25:49.828 "trsvcid": "$NVMF_PORT", 00:25:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.828 "hdgst": ${hdgst:-false}, 00:25:49.828 "ddgst": ${ddgst:-false} 00:25:49.828 }, 00:25:49.828 "method": "bdev_nvme_attach_controller" 00:25:49.828 } 00:25:49.828 EOF 00:25:49.828 )") 00:25:49.828 22:24:44 -- nvmf/common.sh@542 -- # cat 00:25:49.828 22:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.828 22:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.828 { 00:25:49.828 "params": { 00:25:49.828 "name": "Nvme$subsystem", 00:25:49.828 "trtype": "$TEST_TRANSPORT", 00:25:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.828 "adrfam": "ipv4", 00:25:49.828 "trsvcid": "$NVMF_PORT", 00:25:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.828 "hdgst": ${hdgst:-false}, 00:25:49.828 "ddgst": ${ddgst:-false} 00:25:49.828 }, 00:25:49.828 "method": "bdev_nvme_attach_controller" 00:25:49.828 } 00:25:49.828 EOF 00:25:49.828 )") 00:25:49.828 22:24:44 -- nvmf/common.sh@542 -- # cat 00:25:49.828 22:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.828 22:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.828 { 00:25:49.828 "params": { 00:25:49.828 "name": "Nvme$subsystem", 00:25:49.828 "trtype": "$TEST_TRANSPORT", 00:25:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.828 "adrfam": "ipv4", 00:25:49.828 "trsvcid": "$NVMF_PORT", 00:25:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.828 "hdgst": ${hdgst:-false}, 00:25:49.828 "ddgst": ${ddgst:-false} 00:25:49.828 }, 00:25:49.828 "method": "bdev_nvme_attach_controller" 00:25:49.828 } 00:25:49.828 EOF 00:25:49.828 )") 00:25:49.828 22:24:44 -- nvmf/common.sh@542 -- # cat 00:25:49.828 22:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.828 22:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.828 { 00:25:49.828 "params": { 00:25:49.828 "name": "Nvme$subsystem", 00:25:49.828 "trtype": "$TEST_TRANSPORT", 00:25:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.828 "adrfam": "ipv4", 00:25:49.828 "trsvcid": "$NVMF_PORT", 00:25:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.828 "hdgst": ${hdgst:-false}, 00:25:49.828 "ddgst": ${ddgst:-false} 00:25:49.828 }, 00:25:49.828 "method": "bdev_nvme_attach_controller" 00:25:49.828 } 00:25:49.828 EOF 00:25:49.828 )") 00:25:49.828 22:24:44 -- nvmf/common.sh@542 -- # cat 00:25:49.828 22:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.828 22:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.828 { 00:25:49.828 "params": { 00:25:49.828 "name": "Nvme$subsystem", 00:25:49.828 "trtype": "$TEST_TRANSPORT", 00:25:49.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.828 "adrfam": "ipv4", 00:25:49.828 "trsvcid": "$NVMF_PORT", 00:25:49.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.829 "hdgst": ${hdgst:-false}, 00:25:49.829 "ddgst": ${ddgst:-false} 00:25:49.829 }, 00:25:49.829 "method": "bdev_nvme_attach_controller" 00:25:49.829 } 00:25:49.829 EOF 00:25:49.829 )") 00:25:49.829 [2024-07-24 22:24:44.782036] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:25:49.829 [2024-07-24 22:24:44.782092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663792 ] 00:25:49.829 22:24:44 -- nvmf/common.sh@542 -- # cat 00:25:49.829 22:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.829 22:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.829 { 00:25:49.829 "params": { 00:25:49.829 "name": "Nvme$subsystem", 00:25:49.829 "trtype": "$TEST_TRANSPORT", 00:25:49.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.829 "adrfam": "ipv4", 00:25:49.829 "trsvcid": "$NVMF_PORT", 00:25:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.829 "hdgst": ${hdgst:-false}, 00:25:49.829 "ddgst": ${ddgst:-false} 00:25:49.829 }, 00:25:49.829 "method": "bdev_nvme_attach_controller" 00:25:49.829 } 00:25:49.829 EOF 00:25:49.829 )") 00:25:49.829 22:24:44 -- nvmf/common.sh@542 -- # cat 00:25:49.829 22:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.829 22:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.829 { 00:25:49.829 "params": { 00:25:49.829 "name": "Nvme$subsystem", 00:25:49.829 "trtype": "$TEST_TRANSPORT", 00:25:49.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.829 "adrfam": "ipv4", 00:25:49.829 "trsvcid": "$NVMF_PORT", 00:25:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.829 "hdgst": ${hdgst:-false}, 00:25:49.829 "ddgst": ${ddgst:-false} 00:25:49.829 }, 00:25:49.829 "method": "bdev_nvme_attach_controller" 00:25:49.829 } 00:25:49.829 EOF 00:25:49.829 )") 00:25:49.829 22:24:44 -- nvmf/common.sh@542 -- # cat 00:25:49.829 22:24:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.829 22:24:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.829 { 00:25:49.829 "params": { 00:25:49.829 "name": "Nvme$subsystem", 00:25:49.829 "trtype": "$TEST_TRANSPORT", 00:25:49.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.829 "adrfam": "ipv4", 00:25:49.829 "trsvcid": "$NVMF_PORT", 00:25:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.829 "hdgst": ${hdgst:-false}, 00:25:49.829 "ddgst": ${ddgst:-false} 00:25:49.829 }, 00:25:49.829 "method": "bdev_nvme_attach_controller" 00:25:49.829 } 00:25:49.829 EOF 00:25:49.829 )") 00:25:49.829 22:24:44 -- nvmf/common.sh@542 -- # cat 00:25:49.829 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.829 22:24:44 -- nvmf/common.sh@544 -- # jq . 
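The wall of config+=() here-documents above builds one "bdev_nvme_attach_controller" params block per target subsystem (Nvme1 through Nvme10); the blocks are then comma-joined, validated with jq, and handed to bdevperf on /dev/fd/63 via process substitution. Below is a minimal bash sketch of that pattern, not the verbatim nvmf/common.sh helper: the function name, the surrounding "subsystems"/"bdev" wrapper, and the assumption that TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT were exported earlier in the run are all illustrative.

    # Sketch only: build one attach-controller config block per subsystem,
    # join them, validate with jq, and hand the JSON to bdevperf on fd 63.
    build_attach_config() {
        local subsystem config=()
        for subsystem in "$@"; do
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
EOF
            )")
        done
        # Comma-join the blocks inside a bdev-subsystem wrapper and pretty-print
        # (and implicitly validate) the result, matching the "jq ." call above.
        local IFS=,
        jq . <<JSON
    { "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
    }

    # bdevperf reads that JSON from /dev/fd/63 and runs a 10-second verify
    # workload at queue depth 64 with 64 KiB I/Os against the ten controllers.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(build_attach_config 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10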
00:25:49.829 22:24:44 -- nvmf/common.sh@545 -- # IFS=, 00:25:49.829 22:24:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:49.829 "params": { 00:25:49.829 "name": "Nvme1", 00:25:49.829 "trtype": "tcp", 00:25:49.829 "traddr": "10.0.0.2", 00:25:49.829 "adrfam": "ipv4", 00:25:49.829 "trsvcid": "4420", 00:25:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:49.829 "hdgst": false, 00:25:49.829 "ddgst": false 00:25:49.829 }, 00:25:49.829 "method": "bdev_nvme_attach_controller" 00:25:49.829 },{ 00:25:49.829 "params": { 00:25:49.829 "name": "Nvme2", 00:25:49.829 "trtype": "tcp", 00:25:49.829 "traddr": "10.0.0.2", 00:25:49.829 "adrfam": "ipv4", 00:25:49.829 "trsvcid": "4420", 00:25:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:49.829 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:49.829 "hdgst": false, 00:25:49.829 "ddgst": false 00:25:49.829 }, 00:25:49.829 "method": "bdev_nvme_attach_controller" 00:25:49.829 },{ 00:25:49.829 "params": { 00:25:49.829 "name": "Nvme3", 00:25:49.829 "trtype": "tcp", 00:25:49.829 "traddr": "10.0.0.2", 00:25:49.829 "adrfam": "ipv4", 00:25:49.829 "trsvcid": "4420", 00:25:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:49.829 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:49.829 "hdgst": false, 00:25:49.829 "ddgst": false 00:25:49.829 }, 00:25:49.829 "method": "bdev_nvme_attach_controller" 00:25:49.829 },{ 00:25:49.829 "params": { 00:25:49.829 "name": "Nvme4", 00:25:49.829 "trtype": "tcp", 00:25:49.829 "traddr": "10.0.0.2", 00:25:49.829 "adrfam": "ipv4", 00:25:49.829 "trsvcid": "4420", 00:25:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:49.829 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:49.829 "hdgst": false, 00:25:49.829 "ddgst": false 00:25:49.829 }, 00:25:49.829 "method": "bdev_nvme_attach_controller" 00:25:49.829 },{ 00:25:49.829 "params": { 00:25:49.829 "name": "Nvme5", 00:25:49.829 "trtype": "tcp", 00:25:49.829 "traddr": "10.0.0.2", 00:25:49.829 "adrfam": "ipv4", 00:25:49.829 "trsvcid": "4420", 00:25:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:49.829 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:49.829 "hdgst": false, 00:25:49.829 "ddgst": false 00:25:49.829 }, 00:25:49.829 "method": "bdev_nvme_attach_controller" 00:25:49.829 },{ 00:25:49.829 "params": { 00:25:49.829 "name": "Nvme6", 00:25:49.829 "trtype": "tcp", 00:25:49.829 "traddr": "10.0.0.2", 00:25:49.829 "adrfam": "ipv4", 00:25:49.829 "trsvcid": "4420", 00:25:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:49.829 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:49.829 "hdgst": false, 00:25:49.829 "ddgst": false 00:25:49.829 }, 00:25:49.829 "method": "bdev_nvme_attach_controller" 00:25:49.829 },{ 00:25:49.829 "params": { 00:25:49.829 "name": "Nvme7", 00:25:49.829 "trtype": "tcp", 00:25:49.829 "traddr": "10.0.0.2", 00:25:49.829 "adrfam": "ipv4", 00:25:49.829 "trsvcid": "4420", 00:25:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:49.829 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:49.829 "hdgst": false, 00:25:49.829 "ddgst": false 00:25:49.829 }, 00:25:49.829 "method": "bdev_nvme_attach_controller" 00:25:49.829 },{ 00:25:49.829 "params": { 00:25:49.829 "name": "Nvme8", 00:25:49.829 "trtype": "tcp", 00:25:49.829 "traddr": "10.0.0.2", 00:25:49.829 "adrfam": "ipv4", 00:25:49.829 "trsvcid": "4420", 00:25:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:49.829 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:49.829 "hdgst": false, 00:25:49.829 "ddgst": false 00:25:49.829 }, 00:25:49.829 "method": 
"bdev_nvme_attach_controller" 00:25:49.829 },{ 00:25:49.829 "params": { 00:25:49.829 "name": "Nvme9", 00:25:49.829 "trtype": "tcp", 00:25:49.829 "traddr": "10.0.0.2", 00:25:49.829 "adrfam": "ipv4", 00:25:49.829 "trsvcid": "4420", 00:25:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:49.829 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:49.829 "hdgst": false, 00:25:49.829 "ddgst": false 00:25:49.829 }, 00:25:49.829 "method": "bdev_nvme_attach_controller" 00:25:49.829 },{ 00:25:49.829 "params": { 00:25:49.829 "name": "Nvme10", 00:25:49.829 "trtype": "tcp", 00:25:49.829 "traddr": "10.0.0.2", 00:25:49.829 "adrfam": "ipv4", 00:25:49.829 "trsvcid": "4420", 00:25:49.829 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:49.829 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:49.829 "hdgst": false, 00:25:49.829 "ddgst": false 00:25:49.829 }, 00:25:49.829 "method": "bdev_nvme_attach_controller" 00:25:49.829 }' 00:25:49.829 [2024-07-24 22:24:44.838250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.829 [2024-07-24 22:24:44.876197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.736 Running I/O for 10 seconds... 00:25:52.012 22:24:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:52.012 22:24:46 -- common/autotest_common.sh@852 -- # return 0 00:25:52.012 22:24:46 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:52.012 22:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.012 22:24:46 -- common/autotest_common.sh@10 -- # set +x 00:25:52.012 22:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.012 22:24:46 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:52.012 22:24:46 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:52.012 22:24:46 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:52.012 22:24:46 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:52.012 22:24:46 -- target/shutdown.sh@57 -- # local ret=1 00:25:52.012 22:24:46 -- target/shutdown.sh@58 -- # local i 00:25:52.012 22:24:46 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:52.012 22:24:46 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:52.012 22:24:46 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:52.012 22:24:46 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:52.012 22:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.012 22:24:46 -- common/autotest_common.sh@10 -- # set +x 00:25:52.012 22:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.012 22:24:47 -- target/shutdown.sh@60 -- # read_io_count=211 00:25:52.012 22:24:47 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:25:52.012 22:24:47 -- target/shutdown.sh@64 -- # ret=0 00:25:52.012 22:24:47 -- target/shutdown.sh@65 -- # break 00:25:52.012 22:24:47 -- target/shutdown.sh@69 -- # return 0 00:25:52.012 22:24:47 -- target/shutdown.sh@134 -- # killprocess 3663510 00:25:52.012 22:24:47 -- common/autotest_common.sh@926 -- # '[' -z 3663510 ']' 00:25:52.012 22:24:47 -- common/autotest_common.sh@930 -- # kill -0 3663510 00:25:52.012 22:24:47 -- common/autotest_common.sh@931 -- # uname 00:25:52.012 22:24:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:52.012 22:24:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3663510 00:25:52.012 22:24:47 -- common/autotest_common.sh@932 -- # 
process_name=reactor_1 00:25:52.012 22:24:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:52.012 22:24:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3663510' 00:25:52.012 killing process with pid 3663510 00:25:52.012 22:24:47 -- common/autotest_common.sh@945 -- # kill 3663510 00:25:52.012 22:24:47 -- common/autotest_common.sh@950 -- # wait 3663510 00:25:52.012 [2024-07-24 22:24:47.058914] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.058960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.058977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.058983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.058990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.058996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.059002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.059009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.059014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.059021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.059028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.059035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.059041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.059053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.059060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.059066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.059073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.059079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 22:24:47.059085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 00:25:52.012 [2024-07-24 
22:24:47.059091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2077cd0 is same with the state(5) to be set 
00:25:52.012 [... same tcp.c:1574 message repeated for tqpair=0x2077cd0 through 2024-07-24 22:24:47.059370 ...] 
00:25:52.013 [2024-07-24 22:24:47.060502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a640 is same with the state(5) to be set 
00:25:52.013 [... same message repeated for tqpair=0x207a640 through 2024-07-24 22:24:47.060869, interleaved with the admin-queue teardown notices below ...] 
00:25:52.013 [2024-07-24 22:24:47.060689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:52.013 [2024-07-24 22:24:47.060723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:52.013 [... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3 ...] 
00:25:52.013 [2024-07-24 22:24:47.060791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa5320 is same with the state(5) to be set 
00:25:52.013 [2024-07-24 22:24:47.064918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078180 is same with the state(5) to be set 
00:25:52.013 [2024-07-24 22:24:47.065908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078630 is same with the state(5) to be set 
00:25:52.014 [... same message repeated for tqpair=0x2078630 through 2024-07-24 22:24:47.066323 ...] 
00:25:52.014 [2024-07-24 22:24:47.067319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078ac0 is same with the state(5) to be set 
00:25:52.015 [... same message repeated for tqpair=0x2078ac0 through 2024-07-24 22:24:47.067733 ...] 
00:25:52.015 [2024-07-24 22:24:47.068367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078f50 is same with the state(5) to be set 
00:25:52.016 [... same message repeated for tqpair=0x2078f50 through 2024-07-24 22:24:47.068784 ...] 
00:25:52.016 [2024-07-24 22:24:47.069527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 
00:25:52.016 [... same message repeated for tqpair=0x20793e0 ...] 
00:25:52.016 [2024-07-24
22:24:47.069599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069659] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069717] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same 
with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.016 [2024-07-24 22:24:47.069775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069782] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069800] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069827] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069834] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20793e0 is same with the state(5) to be set 00:25:52.017 [2024-07-24 22:24:47.069879] 
[nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* / nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* pairs follow for a series of outstanding READ and WRITE commands on sqid:1 (nsid:1, len:128, cid and lba varying per command, lba 29184 through 40192): each command is reported ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; from 2024-07-24 22:24:47.070962 these notices are interleaved with the same tcp.c:1574 recv-state *ERROR*, now for tqpair=0x2079870]
[the tcp.c:1574 recv-state *ERROR* for tqpair=0x2079870 repeats through 2024-07-24 22:24:47.071395, interleaved with one more READ (sqid:1 cid:62 nsid:1 lba:34688 len:128) reported ABORTED - SQ DELETION]
00:25:52.019 [2024-07-24 22:24:47.071433] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a9c250 was disconnected and freed. reset controller.
00:25:52.019 [2024-07-24 22:24:47.071492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[for each admin qpair the four outstanding ASYNC EVENT REQUEST commands (cid:0 through cid:3) are reported by nvme_qpair.c: 474:spdk_nvme_print_completion as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, and each block ends with nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x... is same with the state(5) to be set, here for tqpair=0x1ad1560, 0x1acf530, 0x1b87900, 0x1b85130 and 0x1a9dd10]
00:25:52.020 [2024-07-24 22:24:47.071887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa5320 (9): Bad file descriptor
[two further identical ASYNC EVENT REQUEST abort blocks follow, ending with the same nvme_tcp.c: 322 *ERROR* for tqpair=0x1b874d0 and tqpair=0x1ad0860]
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.020 [2024-07-24 22:24:47.072036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.020 [2024-07-24 22:24:47.072047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad0860 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.020 [2024-07-24 22:24:47.072243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-24 22:24:47.072251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.020 the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.020 [2024-07-24 22:24:47.072267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.020 [2024-07-24 22:24:47.072274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.020 [2024-07-24 22:24:47.072281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.020 [2024-07-24 22:24:47.072289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.020 [2024-07-24 22:24:47.072296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.020 [2024-07-24 22:24:47.072304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c70a30 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072374] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:52.020 [2024-07-24 22:24:47.072375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.020 [2024-07-24 22:24:47.072397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072492] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072558] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072581] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072607] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072634] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2079d00 is same with the state(5) to be set 00:25:52.021 [2024-07-24 22:24:47.072660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 
[2024-07-24 22:24:47.072683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.021 [2024-07-24 22:24:47.072724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.021 [2024-07-24 22:24:47.072733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 
22:24:47.072839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.072992] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.072999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.073007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.073014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.073022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.022 [2024-07-24 22:24:47.073028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.022 [2024-07-24 22:24:47.073219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073276] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073435] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073525] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073615] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073658] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.073963] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074325] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the 
state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.022 [2024-07-24 22:24:47.074633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.074677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.074721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.074763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.074807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.074851] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.074897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.074947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.074995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.075039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.075088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.075135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.075177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.075229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.075274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.075317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.075360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x207a190 is same with the state(5) to be set 00:25:52.023 [2024-07-24 22:24:47.086884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.086905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.086916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.086925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.086937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.086946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.086957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.086966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.086978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.086990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.023 [2024-07-24 22:24:47.087448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.087528] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c3e9a0 was disconnected and freed. reset controller. 
00:25:52.023 [2024-07-24 22:24:47.089285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad1560 (9): Bad file descriptor 00:25:52.023 [2024-07-24 22:24:47.089322] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acf530 (9): Bad file descriptor 00:25:52.023 [2024-07-24 22:24:47.089343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b87900 (9): Bad file descriptor 00:25:52.023 [2024-07-24 22:24:47.089363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b85130 (9): Bad file descriptor 00:25:52.023 [2024-07-24 22:24:47.089403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.023 [2024-07-24 22:24:47.089416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.089427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.023 [2024-07-24 22:24:47.089440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.089451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.023 [2024-07-24 22:24:47.089460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.023 [2024-07-24 22:24:47.089470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:52.024 [2024-07-24 22:24:47.089479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.089488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c70600 is same with the state(5) to be set 00:25:52.024 [2024-07-24 22:24:47.089509] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9dd10 (9): Bad file descriptor 00:25:52.024 [2024-07-24 22:24:47.089531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b874d0 (9): Bad file descriptor 00:25:52.024 [2024-07-24 22:24:47.089550] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad0860 (9): Bad file descriptor 00:25:52.024 [2024-07-24 22:24:47.089572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c70a30 (9): Bad file descriptor 00:25:52.024 [2024-07-24 22:24:47.091334] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:52.024 [2024-07-24 22:24:47.091369] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:52.024 [2024-07-24 22:24:47.091444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.091982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.091993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.092002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.092014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.092024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.092036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.092052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.092064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.092073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.092086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.092095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.092107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.092117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.092129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.092139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.092152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.092162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.092175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.092185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.092196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.024 [2024-07-24 22:24:47.092206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.024 [2024-07-24 22:24:47.092219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092767] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.025 [2024-07-24 22:24:47.092819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.025 [2024-07-24 22:24:47.092829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ce00 is same with the state(5) to be set 00:25:52.025 [2024-07-24 22:24:47.094806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.025 [2024-07-24 22:24:47.095281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.025 [2024-07-24 22:24:47.095640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.025 [2024-07-24 22:24:47.095657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9dd10 with addr=10.0.0.2, port=4420 00:25:52.025 [2024-07-24 22:24:47.095669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9dd10 is same with the state(5) to be set 00:25:52.025 [2024-07-24 22:24:47.096087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.025 [2024-07-24 22:24:47.096451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.025 [2024-07-24 22:24:47.096466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad1560 with addr=10.0.0.2, port=4420 00:25:52.025 [2024-07-24 22:24:47.096476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad1560 is same with the state(5) to be set 00:25:52.025 [2024-07-24 22:24:47.096840] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:52.025 [2024-07-24 22:24:47.096896] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:52.025 [2024-07-24 22:24:47.096958] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:52.025 [2024-07-24 22:24:47.097782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.025 [2024-07-24 22:24:47.098155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.025 [2024-07-24 22:24:47.098172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa5320 with addr=10.0.0.2, port=4420 00:25:52.025 [2024-07-24 22:24:47.098183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa5320 is same with the state(5) to be set 00:25:52.025 [2024-07-24 22:24:47.098199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9dd10 (9): Bad file descriptor 00:25:52.026 [2024-07-24 
22:24:47.098213] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad1560 (9): Bad file descriptor 00:25:52.026 [2024-07-24 22:24:47.098600] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:52.026 [2024-07-24 22:24:47.098656] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:52.026 [2024-07-24 22:24:47.098707] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:52.026 [2024-07-24 22:24:47.098742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.098756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.098773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.098783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.098797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.098807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.098818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.098828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.098841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.098851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.098863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.098873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.098884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.098894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.098906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.098916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.098929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.098938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.098950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.098964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.098977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.098987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.098999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.026 [2024-07-24 22:24:47.099515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.026 [2024-07-24 22:24:47.099527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.099536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.099548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.099559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.099570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.099580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.099591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.099601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.099613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.099622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.099634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.099643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.099655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.099664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.099742] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a9ac70 was disconnected and freed. reset controller. 00:25:52.027 [2024-07-24 22:24:47.099781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa5320 (9): Bad file descriptor 00:25:52.027 [2024-07-24 22:24:47.099795] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:52.027 [2024-07-24 22:24:47.099804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:52.027 [2024-07-24 22:24:47.099815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:52.027 [2024-07-24 22:24:47.099831] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:52.027 [2024-07-24 22:24:47.099839] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:52.027 [2024-07-24 22:24:47.099849] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:52.027 [2024-07-24 22:24:47.099900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c70600 (9): Bad file descriptor 00:25:52.027 [2024-07-24 22:24:47.101164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.027 [2024-07-24 22:24:47.101181] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.027 [2024-07-24 22:24:47.101217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.027 [2024-07-24 22:24:47.101230] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.027 [2024-07-24 22:24:47.101239] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:52.027 [2024-07-24 22:24:47.101284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 
22:24:47.101512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101731] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.027 [2024-07-24 22:24:47.101863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.027 [2024-07-24 22:24:47.101873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.101885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.101894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.101906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.101915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.101927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.101937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.101949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.101959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.101971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.101980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.101992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102173] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.102678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.102689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4ad90 is same with the state(5) to be set 00:25:52.028 [2024-07-24 22:24:47.104058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.104071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.028 [2024-07-24 22:24:47.104083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.028 [2024-07-24 22:24:47.104091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.029 [2024-07-24 22:24:47.104344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.029 [2024-07-24 22:24:47.104352] nvme_qpair.c: 
[... several hundred near-identical *NOTICE* records omitted: nvme_qpair.c: 243:nvme_io_qpair_print_command reports outstanding READ and WRITE commands on sqid:1 (nsid:1, len:128, lba values between 34944 and 46080, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by nvme_qpair.c: 474:spdk_nvme_print_completion reporting ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; interspersed nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state *ERROR* records state "The recv state of tqpair=0x1a93f50 / 0x1a95530 / 0x1a96b10 is same with the state(5) to be set" ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.034 [2024-07-24 22:24:47.119760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.034 [2024-07-24 22:24:47.119772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.119782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.119794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.119804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.119816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.119826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.119838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.119847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.119859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.119869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.119881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.119891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.119903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.119913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.119927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.119936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.119949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.119958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.119970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.119980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.119991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.120407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.120418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a980f0 is same with the state(5) to be set 00:25:52.035 [2024-07-24 22:24:47.121787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.121804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.121819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.121829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.121841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.121858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.121871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.121880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.121892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.121903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.121915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.121924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.121937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.121947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.035 [2024-07-24 22:24:47.121959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.035 [2024-07-24 22:24:47.121969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.121981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.121992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43136 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.036 [2024-07-24 22:24:47.122842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.036 [2024-07-24 22:24:47.122854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.122864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.122876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.122886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.122899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.122909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.122922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.122932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.122944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.122954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.122967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.122978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.122990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.123000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.123012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.123022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.123035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.123050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.123062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.123072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.123084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.123094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.123106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:52.037 [2024-07-24 22:24:47.123116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.123129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.123139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.123150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.123161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.123173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.123183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.123195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.123204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.123217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.037 [2024-07-24 22:24:47.123227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:52.037 [2024-07-24 22:24:47.123237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a996d0 is same with the state(5) to be set 00:25:52.037 [2024-07-24 22:24:47.124748] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:52.037 [2024-07-24 22:24:47.124773] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.037 [2024-07-24 22:24:47.124782] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:52.037 [2024-07-24 22:24:47.124793] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:52.037 [2024-07-24 22:24:47.124862] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:52.037 [2024-07-24 22:24:47.124878] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:52.037 [2024-07-24 22:24:47.124889] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:52.037 [2024-07-24 22:24:47.124900] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
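Every completion in the dump above carries the status pair (00/08), which SPDK renders as "ABORTED - SQ DELETION": status code type 0x0 (generic command status) and status code 0x08 (command aborted because its submission queue was deleted during the controller reset). A minimal decoding sketch in Python, limited to the two generic codes that actually appear in this log; the helper name and lookup table are illustrative, not SPDK API:

    # Sketch: decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion.
    # Only a partial, illustrative subset of the NVMe generic status codes is listed.
    GENERIC_STATUS = {
        0x00: "SUCCESSFUL COMPLETION",
        0x08: "ABORTED - SQ DELETION",   # what every completion above reports
    }

    def decode(sct: int, sc: int) -> str:
        if sct == 0x0:                   # SCT 0x0 = generic command status
            return GENERIC_STATUS.get(sc, f"unknown generic status {sc:#04x}")
        return f"SCT {sct:#x} / SC {sc:#04x}"

    print(decode(0x00, 0x08))            # -> "ABORTED - SQ DELETION"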
00:25:52.037 [2024-07-24 22:24:47.125199] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:52.037 [2024-07-24 22:24:47.125211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:52.037 [2024-07-24 22:24:47.125220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:52.299 task offset: 34816 on job bdev=Nvme10n1 fails
00:25:52.299 
00:25:52.299 Latency(us)
00:25:52.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:52.299 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:52.299 Job: Nvme1n1 ended in about 0.71 seconds with error
00:25:52.299 Verification LBA range: start 0x0 length 0x400
00:25:52.299 Nvme1n1 : 0.71 352.13 22.01 89.79 0.00 143946.54 71576.71 142241.61
00:25:52.299 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:52.299 Job: Nvme2n1 ended in about 0.71 seconds with error
00:25:52.299 Verification LBA range: start 0x0 length 0x400
00:25:52.299 Nvme2n1 : 0.71 353.74 22.11 90.20 0.00 141951.43 20173.69 160477.72
00:25:52.299 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:52.299 Job: Nvme3n1 ended in about 0.72 seconds with error
00:25:52.299 Verification LBA range: start 0x0 length 0x400
00:25:52.299 Nvme3n1 : 0.72 347.34 21.71 88.56 0.00 143295.98 77503.44 135858.98
00:25:52.299 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:52.299 Job: Nvme4n1 ended in about 0.73 seconds with error
00:25:52.299 Verification LBA range: start 0x0 length 0x400
00:25:52.299 Nvme4n1 : 0.73 408.21 25.51 88.26 0.00 124616.55 27468.13 112152.04
00:25:52.299 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:52.299 Job: Nvme5n1 ended in about 0.73 seconds with error
00:25:52.299 Verification LBA range: start 0x0 length 0x400
00:25:52.299 Nvme5n1 : 0.73 396.07 24.75 87.11 0.00 126918.15 77503.44 98019.06
00:25:52.299 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:52.299 Job: Nvme6n1 ended in about 0.74 seconds with error
00:25:52.299 Verification LBA range: start 0x0 length 0x400
00:25:52.299 Nvme6n1 : 0.74 402.71 25.17 86.78 0.00 124166.53 14588.88 115799.26
00:25:52.299 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:52.299 Job: Nvme7n1 ended in about 0.74 seconds with error
00:25:52.299 Verification LBA range: start 0x0 length 0x400
00:25:52.299 Nvme7n1 : 0.74 401.17 25.07 86.45 0.00 123499.72 23820.91 104401.70
00:25:52.299 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:52.299 Job: Nvme8n1 ended in about 0.74 seconds with error
00:25:52.299 Verification LBA range: start 0x0 length 0x400
00:25:52.299 Nvme8n1 : 0.74 391.58 24.47 86.12 0.00 124923.74 61546.85 104401.70
00:25:52.299 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:52.299 Job: Nvme9n1 ended in about 0.72 seconds with error
00:25:52.299 Verification LBA range: start 0x0 length 0x400
00:25:52.299 Nvme9n1 : 0.72 326.46 20.40 58.35 0.00 153059.68 2877.89 139506.20
00:25:52.299 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:52.299 Job: Nvme10n1 ended in about 0.71 seconds with error
00:25:52.299 Verification LBA range: start 0x0 length 0x400
00:25:52.299 Nvme10n1 : 0.71 354.53 22.16 90.40 0.00 131094.43 36472.21 118534.68
00:25:52.299 ===================================================================================================================
00:25:52.299 Total : 3733.93 233.37 852.01 0.00 132941.34 2877.89 160477.72
00:25:52.299 [2024-07-24 22:24:47.147228] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:52.299 [2024-07-24 22:24:47.147270] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:25:52.299 [2024-07-24 22:24:47.147787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.299 [2024-07-24 22:24:47.148273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.299 [2024-07-24 22:24:47.148287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c70600 with addr=10.0.0.2, port=4420
00:25:52.299 [2024-07-24 22:24:47.148297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c70600 is same with the state(5) to be set
00:25:52.299 [2024-07-24 22:24:47.148760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.299 [2024-07-24 22:24:47.149123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.299 [2024-07-24 22:24:47.149135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1acf530 with addr=10.0.0.2, port=4420
00:25:52.299 [2024-07-24 22:24:47.149142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acf530 is same with the state(5) to be set
00:25:52.299 [2024-07-24 22:24:47.149610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.299 [2024-07-24 22:24:47.150122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.299 [2024-07-24 22:24:47.150133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b87900 with addr=10.0.0.2, port=4420
00:25:52.299 [2024-07-24 22:24:47.150141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b87900 is same with the state(5) to be set
00:25:52.299 [2024-07-24 22:24:47.151486] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:52.299 [2024-07-24 22:24:47.152054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.299 [2024-07-24 22:24:47.152466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.299 [2024-07-24 22:24:47.152478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b874d0 with addr=10.0.0.2, port=4420
00:25:52.299 [2024-07-24 22:24:47.152486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b874d0 is same with the state(5) to be set
00:25:52.299 [2024-07-24 22:24:47.152896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.299 [2024-07-24 22:24:47.153375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.299 [2024-07-24 22:24:47.153389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b85130 with addr=10.0.0.2, port=4420
00:25:52.299 [2024-07-24 22:24:47.153396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b85130 is same with the state(5) to be set
00:25:52.299 [2024-07-24 22:24:47.153742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:52.299 [2024-07-24 22:24:47.154168] 
posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.299 [2024-07-24 22:24:47.154182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad0860 with addr=10.0.0.2, port=4420 00:25:52.299 [2024-07-24 22:24:47.154191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad0860 is same with the state(5) to be set 00:25:52.299 [2024-07-24 22:24:47.154532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.299 [2024-07-24 22:24:47.154935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.299 [2024-07-24 22:24:47.154948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c70a30 with addr=10.0.0.2, port=4420 00:25:52.299 [2024-07-24 22:24:47.154956] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c70a30 is same with the state(5) to be set 00:25:52.299 [2024-07-24 22:24:47.154973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c70600 (9): Bad file descriptor 00:25:52.299 [2024-07-24 22:24:47.154985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acf530 (9): Bad file descriptor 00:25:52.299 [2024-07-24 22:24:47.154995] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b87900 (9): Bad file descriptor 00:25:52.299 [2024-07-24 22:24:47.155023] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:52.299 [2024-07-24 22:24:47.155035] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:52.299 [2024-07-24 22:24:47.155056] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:52.299 [2024-07-24 22:24:47.155068] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:52.299 [2024-07-24 22:24:47.155079] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
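In the latency summary above, the MiB/s column follows directly from the IOPS column and the 65536-byte IO size shown in each "Job:" header: with 64 KiB per I/O, MiB/s is simply IOPS divided by 16. A short arithmetic check in Python under that assumption, with a few values copied from the table (it verifies the reported numbers, nothing more):

    # Worked check of the bdevperf summary: MiB/s = IOPS * IO size / 2**20,
    # where the IO size (65536 bytes) comes from the "Job:" lines above.
    IO_SIZE = 65536  # bytes per I/O

    rows = {                 # device: (IOPS, reported MiB/s), copied from the table
        "Nvme1n1":  (352.13, 22.01),
        "Nvme10n1": (354.53, 22.16),
        "Total":    (3733.93, 233.37),
    }

    for name, (iops, reported) in rows.items():
        computed = iops * IO_SIZE / (1024 * 1024)
        print(f"{name}: computed {computed:.2f} MiB/s vs reported {reported:.2f}")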
00:25:52.299 [2024-07-24 22:24:47.155153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:52.299 [2024-07-24 22:24:47.155165] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.299 [2024-07-24 22:24:47.155621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.299 [2024-07-24 22:24:47.156039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.300 [2024-07-24 22:24:47.156055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad1560 with addr=10.0.0.2, port=4420 00:25:52.300 [2024-07-24 22:24:47.156064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad1560 is same with the state(5) to be set 00:25:52.300 [2024-07-24 22:24:47.156074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b874d0 (9): Bad file descriptor 00:25:52.300 [2024-07-24 22:24:47.156084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b85130 (9): Bad file descriptor 00:25:52.300 [2024-07-24 22:24:47.156094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad0860 (9): Bad file descriptor 00:25:52.300 [2024-07-24 22:24:47.156104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c70a30 (9): Bad file descriptor 00:25:52.300 [2024-07-24 22:24:47.156113] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:52.300 [2024-07-24 22:24:47.156120] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:52.300 [2024-07-24 22:24:47.156129] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:52.300 [2024-07-24 22:24:47.156142] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:52.300 [2024-07-24 22:24:47.156149] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:52.300 [2024-07-24 22:24:47.156157] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:52.300 [2024-07-24 22:24:47.156167] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:52.300 [2024-07-24 22:24:47.156174] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:52.300 [2024-07-24 22:24:47.156186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:52.300 [2024-07-24 22:24:47.156254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.300 [2024-07-24 22:24:47.156263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.300 [2024-07-24 22:24:47.156269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.300 [2024-07-24 22:24:47.156748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.300 [2024-07-24 22:24:47.157241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.300 [2024-07-24 22:24:47.157256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9dd10 with addr=10.0.0.2, port=4420 00:25:52.300 [2024-07-24 22:24:47.157265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9dd10 is same with the state(5) to be set 00:25:52.300 [2024-07-24 22:24:47.157741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.300 [2024-07-24 22:24:47.158196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.300 [2024-07-24 22:24:47.158210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa5320 with addr=10.0.0.2, port=4420 00:25:52.300 [2024-07-24 22:24:47.158218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa5320 is same with the state(5) to be set 00:25:52.300 [2024-07-24 22:24:47.158229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad1560 (9): Bad file descriptor 00:25:52.300 [2024-07-24 22:24:47.158238] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:52.300 [2024-07-24 22:24:47.158246] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:52.300 [2024-07-24 22:24:47.158254] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:52.300 [2024-07-24 22:24:47.158264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:52.300 [2024-07-24 22:24:47.158271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:52.300 [2024-07-24 22:24:47.158278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:52.300 [2024-07-24 22:24:47.158289] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:52.300 [2024-07-24 22:24:47.158296] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:52.300 [2024-07-24 22:24:47.158304] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:52.300 [2024-07-24 22:24:47.158313] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:52.300 [2024-07-24 22:24:47.158320] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:52.300 [2024-07-24 22:24:47.158327] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:25:52.300 [2024-07-24 22:24:47.158365] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.300 [2024-07-24 22:24:47.158374] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.300 [2024-07-24 22:24:47.158380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.300 [2024-07-24 22:24:47.158387] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.300 [2024-07-24 22:24:47.158395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9dd10 (9): Bad file descriptor 00:25:52.300 [2024-07-24 22:24:47.158405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa5320 (9): Bad file descriptor 00:25:52.300 [2024-07-24 22:24:47.158413] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:52.300 [2024-07-24 22:24:47.158423] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:52.300 [2024-07-24 22:24:47.158430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:52.300 [2024-07-24 22:24:47.158475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.300 [2024-07-24 22:24:47.158493] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:52.300 [2024-07-24 22:24:47.158502] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:52.300 [2024-07-24 22:24:47.158509] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:52.300 [2024-07-24 22:24:47.158517] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.300 [2024-07-24 22:24:47.158524] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.300 [2024-07-24 22:24:47.158531] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.300 [2024-07-24 22:24:47.158557] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:25:52.300 [2024-07-24 22:24:47.158568] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:25:52.300 [2024-07-24 22:24:47.158577] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:52.300 [2024-07-24 22:24:47.158587] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:52.300 [2024-07-24 22:24:47.158595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.300 [2024-07-24 22:24:47.158602] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.300 [2024-07-24 22:24:47.159142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.300 [2024-07-24 22:24:47.159503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.300 [2024-07-24 22:24:47.159517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c70a30 with addr=10.0.0.2, port=4420 00:25:52.300 [2024-07-24 22:24:47.159526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c70a30 is same with the state(5) to be set 00:25:52.300 [2024-07-24 22:24:47.159882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.300 [2024-07-24 22:24:47.160359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.300 [2024-07-24 22:24:47.160372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad0860 with addr=10.0.0.2, port=4420 00:25:52.300 [2024-07-24 22:24:47.160380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad0860 is same with the state(5) to be set 00:25:52.300 [2024-07-24 22:24:47.160794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.300 [2024-07-24 22:24:47.161209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.300 [2024-07-24 22:24:47.161223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b85130 with addr=10.0.0.2, port=4420 00:25:52.300 [2024-07-24 22:24:47.161231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b85130 is same with the state(5) to be set 00:25:52.300 [2024-07-24 22:24:47.161707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.300 [2024-07-24 22:24:47.162188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.300 [2024-07-24 22:24:47.162203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b874d0 with addr=10.0.0.2, port=4420 00:25:52.300 [2024-07-24 22:24:47.162211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b874d0 is same with the state(5) to be set 00:25:52.300 [2024-07-24 22:24:47.162246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c70a30 (9): Bad file descriptor 00:25:52.300 [2024-07-24 22:24:47.162257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad0860 (9): Bad file descriptor 00:25:52.300 [2024-07-24 22:24:47.162267] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b85130 (9): Bad file descriptor 00:25:52.300 [2024-07-24 22:24:47.162277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b874d0 (9): Bad file descriptor 00:25:52.300 [2024-07-24 22:24:47.162345] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:52.300 [2024-07-24 22:24:47.162355] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:52.300 [2024-07-24 22:24:47.162363] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:25:52.300 [2024-07-24 22:24:47.162373] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:52.300 [2024-07-24 22:24:47.162381] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:52.300 [2024-07-24 22:24:47.162388] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:52.300 [2024-07-24 22:24:47.162396] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:52.300 [2024-07-24 22:24:47.162403] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:52.300 [2024-07-24 22:24:47.162410] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:52.300 [2024-07-24 22:24:47.162419] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:52.301 [2024-07-24 22:24:47.162426] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:52.301 [2024-07-24 22:24:47.162433] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:52.301 [2024-07-24 22:24:47.162459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:25:52.301 [2024-07-24 22:24:47.162469] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:52.301 [2024-07-24 22:24:47.162479] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:25:52.301 [2024-07-24 22:24:47.162488] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:52.301 [2024-07-24 22:24:47.162496] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.301 [2024-07-24 22:24:47.162503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.301 [2024-07-24 22:24:47.162538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.301 [2024-07-24 22:24:47.162547] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.301 [2024-07-24 22:24:47.163057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.301 [2024-07-24 22:24:47.163478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.301 [2024-07-24 22:24:47.163493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b87900 with addr=10.0.0.2, port=4420 00:25:52.301 [2024-07-24 22:24:47.163501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b87900 is same with the state(5) to be set 00:25:52.301 [2024-07-24 22:24:47.163998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.301 [2024-07-24 22:24:47.164466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.301 [2024-07-24 22:24:47.164482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1acf530 with addr=10.0.0.2, port=4420 00:25:52.301 [2024-07-24 22:24:47.164497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acf530 is same with the state(5) to be set 00:25:52.301 [2024-07-24 22:24:47.164935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.301 [2024-07-24 22:24:47.165392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.301 [2024-07-24 22:24:47.165407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c70600 with addr=10.0.0.2, port=4420 00:25:52.301 [2024-07-24 22:24:47.165418] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c70600 is same with the state(5) to be set 00:25:52.301 [2024-07-24 22:24:47.165761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.301 [2024-07-24 22:24:47.166239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.301 [2024-07-24 22:24:47.166256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ad1560 with addr=10.0.0.2, port=4420 00:25:52.301 [2024-07-24 22:24:47.166267] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad1560 is same with the state(5) to be set 00:25:52.301 [2024-07-24 22:24:47.166307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b87900 (9): Bad file descriptor 00:25:52.301 [2024-07-24 22:24:47.166322] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acf530 (9): Bad file descriptor 00:25:52.301 [2024-07-24 22:24:47.166335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c70600 (9): Bad file descriptor 00:25:52.301 [2024-07-24 22:24:47.166346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ad1560 (9): Bad file descriptor 00:25:52.301 [2024-07-24 22:24:47.166410] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:52.301 [2024-07-24 22:24:47.166423] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:52.301 [2024-07-24 22:24:47.166433] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:25:52.301 [2024-07-24 22:24:47.166445] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:52.301 [2024-07-24 22:24:47.166454] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:52.301 [2024-07-24 22:24:47.166463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:52.301 [2024-07-24 22:24:47.166475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:52.301 [2024-07-24 22:24:47.166484] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:52.301 [2024-07-24 22:24:47.166494] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:52.301 [2024-07-24 22:24:47.166506] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:52.301 [2024-07-24 22:24:47.166514] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:52.301 [2024-07-24 22:24:47.166523] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:52.301 [2024-07-24 22:24:47.166556] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.301 [2024-07-24 22:24:47.166569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:52.301 [2024-07-24 22:24:47.166581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.301 [2024-07-24 22:24:47.166589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.301 [2024-07-24 22:24:47.166598] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.301 [2024-07-24 22:24:47.166606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.301 [2024-07-24 22:24:47.167072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.301 [2024-07-24 22:24:47.167512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.301 [2024-07-24 22:24:47.167527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aa5320 with addr=10.0.0.2, port=4420 00:25:52.301 [2024-07-24 22:24:47.167536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa5320 is same with the state(5) to be set 00:25:52.301 [2024-07-24 22:24:47.167831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.301 [2024-07-24 22:24:47.168196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.301 [2024-07-24 22:24:47.168211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a9dd10 with addr=10.0.0.2, port=4420 00:25:52.301 [2024-07-24 22:24:47.168221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9dd10 is same with the state(5) to be set 00:25:52.301 [2024-07-24 22:24:47.168255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa5320 (9): Bad file descriptor 00:25:52.301 [2024-07-24 22:24:47.168268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9dd10 (9): Bad file descriptor 00:25:52.301 [2024-07-24 22:24:47.168302] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.301 [2024-07-24 22:24:47.168312] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.301 [2024-07-24 22:24:47.168321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.301 [2024-07-24 22:24:47.168333] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:52.301 [2024-07-24 22:24:47.168342] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:52.301 [2024-07-24 22:24:47.168351] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:52.301 [2024-07-24 22:24:47.168382] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.301 [2024-07-24 22:24:47.168392] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.561 22:24:47 -- target/shutdown.sh@135 -- # nvmfpid= 00:25:52.561 22:24:47 -- target/shutdown.sh@138 -- # sleep 1 00:25:53.499 22:24:48 -- target/shutdown.sh@141 -- # kill -9 3663792 00:25:53.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (3663792) - No such process 00:25:53.499 22:24:48 -- target/shutdown.sh@141 -- # true 00:25:53.499 22:24:48 -- target/shutdown.sh@143 -- # stoptarget 00:25:53.499 22:24:48 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:53.499 22:24:48 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:53.499 22:24:48 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:53.499 22:24:48 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:53.499 22:24:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:53.499 22:24:48 -- nvmf/common.sh@116 -- # sync 00:25:53.499 22:24:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:53.499 22:24:48 -- nvmf/common.sh@119 -- # set +e 00:25:53.499 22:24:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:53.499 22:24:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:53.499 rmmod nvme_tcp 00:25:53.499 rmmod nvme_fabrics 00:25:53.499 rmmod nvme_keyring 00:25:53.499 22:24:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:53.499 22:24:48 -- nvmf/common.sh@123 -- # set -e 00:25:53.499 22:24:48 -- nvmf/common.sh@124 -- # return 0 00:25:53.499 22:24:48 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:25:53.499 22:24:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:53.499 22:24:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:53.499 22:24:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:53.499 22:24:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:53.499 22:24:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:53.499 22:24:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.500 22:24:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.500 22:24:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.031 22:24:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:56.031 00:25:56.031 real 0m7.547s 00:25:56.031 user 0m18.553s 00:25:56.031 sys 0m1.255s 00:25:56.031 22:24:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:56.031 22:24:50 -- common/autotest_common.sh@10 -- # set +x 00:25:56.031 ************************************ 00:25:56.031 END TEST nvmf_shutdown_tc3 00:25:56.031 ************************************ 00:25:56.031 22:24:50 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:25:56.031 00:25:56.031 real 0m31.245s 00:25:56.031 user 1m20.501s 00:25:56.031 sys 0m8.223s 00:25:56.031 22:24:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:56.031 22:24:50 -- common/autotest_common.sh@10 -- # set +x 00:25:56.031 ************************************ 00:25:56.031 END TEST nvmf_shutdown 00:25:56.031 ************************************ 00:25:56.031 22:24:50 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:25:56.031 22:24:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:56.031 22:24:50 -- common/autotest_common.sh@10 -- # set +x 00:25:56.031 22:24:50 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:25:56.031 22:24:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:56.031 22:24:50 -- common/autotest_common.sh@10 -- # set +x 00:25:56.031 
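The stoptarget/nvmftestfini teardown traced above reduces to a handful of commands. The sketch below consolidates only what the trace itself shows; SPDK_DIR stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, and the namespace removal is left as a comment because the trace only records the _remove_spdk_ns helper, not the underlying ip netns call.

  rm -f ./local-job0-0-verify.state
  rm -rf "$SPDK_DIR/test/nvmf/target/bdevperf.conf" "$SPDK_DIR/test/nvmf/target/rpcs.txt"
  sync
  modprobe -v -r nvme-tcp      # also unloads nvme_fabrics and nvme_keyring, as the rmmod lines show
  modprobe -v -r nvme-fabrics
  # _remove_spdk_ns            # test-framework helper; assumed to delete cvl_0_0_ns_spdk (not expanded in this trace)
  ip -4 addr flush cvl_0_1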
22:24:50 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:25:56.031 22:24:50 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:56.031 22:24:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:56.031 22:24:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:56.031 22:24:50 -- common/autotest_common.sh@10 -- # set +x 00:25:56.031 ************************************ 00:25:56.031 START TEST nvmf_multicontroller 00:25:56.031 ************************************ 00:25:56.031 22:24:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:56.031 * Looking for test storage... 00:25:56.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:56.031 22:24:50 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:56.031 22:24:50 -- nvmf/common.sh@7 -- # uname -s 00:25:56.031 22:24:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:56.031 22:24:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:56.031 22:24:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:56.031 22:24:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:56.031 22:24:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:56.031 22:24:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:56.031 22:24:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:56.031 22:24:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:56.032 22:24:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:56.032 22:24:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:56.032 22:24:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:56.032 22:24:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:56.032 22:24:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:56.032 22:24:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:56.032 22:24:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:56.032 22:24:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:56.032 22:24:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:56.032 22:24:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:56.032 22:24:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:56.032 22:24:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.032 22:24:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.032 22:24:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.032 22:24:50 -- paths/export.sh@5 -- # export PATH 00:25:56.032 22:24:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:56.032 22:24:50 -- nvmf/common.sh@46 -- # : 0 00:25:56.032 22:24:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:56.032 22:24:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:56.032 22:24:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:56.032 22:24:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:56.032 22:24:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:56.032 22:24:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:56.032 22:24:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:56.032 22:24:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:56.032 22:24:50 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:56.032 22:24:50 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:56.032 22:24:50 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:56.032 22:24:50 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:56.032 22:24:50 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:56.032 22:24:50 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:56.032 22:24:50 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:56.032 22:24:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:56.032 22:24:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:56.032 22:24:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:56.032 22:24:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:56.032 22:24:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:56.032 22:24:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.032 22:24:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:56.032 22:24:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:25:56.032 22:24:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:56.032 22:24:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:56.032 22:24:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:56.032 22:24:50 -- common/autotest_common.sh@10 -- # set +x 00:26:01.384 22:24:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:01.384 22:24:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:01.384 22:24:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:01.384 22:24:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:01.384 22:24:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:01.384 22:24:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:01.384 22:24:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:01.384 22:24:55 -- nvmf/common.sh@294 -- # net_devs=() 00:26:01.384 22:24:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:01.384 22:24:55 -- nvmf/common.sh@295 -- # e810=() 00:26:01.384 22:24:55 -- nvmf/common.sh@295 -- # local -ga e810 00:26:01.384 22:24:55 -- nvmf/common.sh@296 -- # x722=() 00:26:01.384 22:24:55 -- nvmf/common.sh@296 -- # local -ga x722 00:26:01.384 22:24:55 -- nvmf/common.sh@297 -- # mlx=() 00:26:01.384 22:24:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:01.384 22:24:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:01.384 22:24:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:01.384 22:24:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:01.384 22:24:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:01.384 22:24:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:01.384 22:24:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:01.384 22:24:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:01.384 22:24:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:01.384 22:24:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:01.385 22:24:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:01.385 22:24:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:01.385 22:24:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:01.385 22:24:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:01.385 22:24:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:01.385 22:24:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:01.385 22:24:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:01.385 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:01.385 22:24:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:01.385 22:24:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:01.385 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:01.385 22:24:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
00:26:01.385 22:24:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:01.385 22:24:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:01.385 22:24:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.385 22:24:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:01.385 22:24:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.385 22:24:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:01.385 Found net devices under 0000:86:00.0: cvl_0_0 00:26:01.385 22:24:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.385 22:24:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:01.385 22:24:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:01.385 22:24:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:01.385 22:24:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:01.385 22:24:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:01.385 Found net devices under 0000:86:00.1: cvl_0_1 00:26:01.385 22:24:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:01.385 22:24:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:01.385 22:24:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:01.385 22:24:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:01.385 22:24:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:01.385 22:24:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:01.385 22:24:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:01.385 22:24:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:01.385 22:24:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:01.385 22:24:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:01.385 22:24:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:01.385 22:24:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:01.385 22:24:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:01.385 22:24:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:01.385 22:24:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:01.385 22:24:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:01.385 22:24:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:01.385 22:24:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:01.385 22:24:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:01.385 22:24:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:01.385 22:24:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:01.385 22:24:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:01.385 22:24:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:01.385 22:24:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:26:01.385 22:24:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:01.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:01.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:26:01.385 00:26:01.385 --- 10.0.0.2 ping statistics --- 00:26:01.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.385 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:26:01.385 22:24:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:01.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:01.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:26:01.385 00:26:01.385 --- 10.0.0.1 ping statistics --- 00:26:01.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:01.385 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:26:01.385 22:24:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:01.385 22:24:56 -- nvmf/common.sh@410 -- # return 0 00:26:01.385 22:24:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:01.385 22:24:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:01.385 22:24:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:01.385 22:24:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:01.385 22:24:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:01.385 22:24:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:01.385 22:24:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:01.385 22:24:56 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:01.385 22:24:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:01.385 22:24:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:01.385 22:24:56 -- common/autotest_common.sh@10 -- # set +x 00:26:01.385 22:24:56 -- nvmf/common.sh@469 -- # nvmfpid=3667835 00:26:01.385 22:24:56 -- nvmf/common.sh@470 -- # waitforlisten 3667835 00:26:01.385 22:24:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:01.385 22:24:56 -- common/autotest_common.sh@819 -- # '[' -z 3667835 ']' 00:26:01.385 22:24:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.385 22:24:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:01.385 22:24:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.385 22:24:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:01.385 22:24:56 -- common/autotest_common.sh@10 -- # set +x 00:26:01.385 [2024-07-24 22:24:56.091360] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:01.386 [2024-07-24 22:24:56.091408] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.386 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.386 [2024-07-24 22:24:56.150476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:01.386 [2024-07-24 22:24:56.187932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:01.386 [2024-07-24 22:24:56.188065] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
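Before the multicontroller test starts, nvmftestinit splits the two E810 ports into a target/initiator pair: cvl_0_0 (10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace for the target, cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, and one iptables rule opens the 4420 listener before the two ping checks. The commands below are collected verbatim from the trace above into a single sketch.

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator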
00:26:01.386 [2024-07-24 22:24:56.188074] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.386 [2024-07-24 22:24:56.188084] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.386 [2024-07-24 22:24:56.188218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:01.386 [2024-07-24 22:24:56.188308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:01.386 [2024-07-24 22:24:56.188309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.951 22:24:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:01.951 22:24:56 -- common/autotest_common.sh@852 -- # return 0 00:26:01.951 22:24:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:01.951 22:24:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:01.951 22:24:56 -- common/autotest_common.sh@10 -- # set +x 00:26:01.951 22:24:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.951 22:24:56 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:01.951 22:24:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.951 22:24:56 -- common/autotest_common.sh@10 -- # set +x 00:26:01.951 [2024-07-24 22:24:56.934595] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.951 22:24:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.951 22:24:56 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:01.951 22:24:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.951 22:24:56 -- common/autotest_common.sh@10 -- # set +x 00:26:01.951 Malloc0 00:26:01.951 22:24:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.951 22:24:56 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:01.951 22:24:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.951 22:24:56 -- common/autotest_common.sh@10 -- # set +x 00:26:01.951 22:24:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.951 22:24:56 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:01.951 22:24:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.951 22:24:56 -- common/autotest_common.sh@10 -- # set +x 00:26:01.951 22:24:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.951 22:24:56 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.951 22:24:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.951 22:24:56 -- common/autotest_common.sh@10 -- # set +x 00:26:01.951 [2024-07-24 22:24:56.996792] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.951 22:24:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.951 22:24:57 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:01.951 22:24:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.951 22:24:57 -- common/autotest_common.sh@10 -- # set +x 00:26:01.951 [2024-07-24 22:24:57.004739] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:01.951 22:24:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
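With nvmf_tgt running on core mask 0xE inside the namespace, the test provisions cnode1 behind two TCP listeners. The rpc_cmd calls traced above are written out below as direct scripts/rpc.py invocations against the default /var/tmp/spdk.sock; treating rpc_cmd as a thin wrapper over rpc.py is an assumption (the wrapper is not expanded in this trace), but the method names and flags are copied as passed. cnode2 is built the same way immediately afterwards.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421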
00:26:01.951 22:24:57 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:01.951 22:24:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.951 22:24:57 -- common/autotest_common.sh@10 -- # set +x 00:26:01.951 Malloc1 00:26:01.951 22:24:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.951 22:24:57 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:01.951 22:24:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.951 22:24:57 -- common/autotest_common.sh@10 -- # set +x 00:26:01.951 22:24:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.951 22:24:57 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:01.951 22:24:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.951 22:24:57 -- common/autotest_common.sh@10 -- # set +x 00:26:01.951 22:24:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.951 22:24:57 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:01.951 22:24:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.951 22:24:57 -- common/autotest_common.sh@10 -- # set +x 00:26:01.951 22:24:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.951 22:24:57 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:01.951 22:24:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:01.951 22:24:57 -- common/autotest_common.sh@10 -- # set +x 00:26:01.951 22:24:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:01.951 22:24:57 -- host/multicontroller.sh@44 -- # bdevperf_pid=3667892 00:26:01.951 22:24:57 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:01.951 22:24:57 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:01.951 22:24:57 -- host/multicontroller.sh@47 -- # waitforlisten 3667892 /var/tmp/bdevperf.sock 00:26:01.951 22:24:57 -- common/autotest_common.sh@819 -- # '[' -z 3667892 ']' 00:26:01.951 22:24:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:01.951 22:24:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:01.951 22:24:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:01.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:26:01.951 22:24:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:01.951 22:24:57 -- common/autotest_common.sh@10 -- # set +x 00:26:02.884 22:24:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:02.884 22:24:57 -- common/autotest_common.sh@852 -- # return 0 00:26:02.884 22:24:57 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:02.884 22:24:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:02.884 22:24:57 -- common/autotest_common.sh@10 -- # set +x 00:26:03.142 NVMe0n1 00:26:03.142 22:24:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.142 22:24:58 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:03.142 22:24:58 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:03.142 22:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.142 22:24:58 -- common/autotest_common.sh@10 -- # set +x 00:26:03.142 22:24:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.142 1 00:26:03.142 22:24:58 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:03.142 22:24:58 -- common/autotest_common.sh@640 -- # local es=0 00:26:03.142 22:24:58 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:03.142 22:24:58 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:03.142 22:24:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:03.142 22:24:58 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:03.142 22:24:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:03.142 22:24:58 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:03.142 22:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.142 22:24:58 -- common/autotest_common.sh@10 -- # set +x 00:26:03.142 request: 00:26:03.142 { 00:26:03.142 "name": "NVMe0", 00:26:03.142 "trtype": "tcp", 00:26:03.142 "traddr": "10.0.0.2", 00:26:03.142 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:03.142 "hostaddr": "10.0.0.2", 00:26:03.142 "hostsvcid": "60000", 00:26:03.142 "adrfam": "ipv4", 00:26:03.142 "trsvcid": "4420", 00:26:03.142 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.142 "method": "bdev_nvme_attach_controller", 00:26:03.142 "req_id": 1 00:26:03.142 } 00:26:03.142 Got JSON-RPC error response 00:26:03.142 response: 00:26:03.142 { 00:26:03.142 "code": -114, 00:26:03.142 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:03.142 } 00:26:03.142 22:24:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:03.142 22:24:58 -- common/autotest_common.sh@643 -- # es=1 00:26:03.143 22:24:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:03.143 22:24:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:03.143 22:24:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:03.143 22:24:58 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:03.143 22:24:58 -- common/autotest_common.sh@640 -- # local es=0 00:26:03.143 22:24:58 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:03.143 22:24:58 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:03.143 22:24:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:03.143 22:24:58 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:03.143 22:24:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:03.143 22:24:58 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:03.143 22:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.143 22:24:58 -- common/autotest_common.sh@10 -- # set +x 00:26:03.143 request: 00:26:03.143 { 00:26:03.143 "name": "NVMe0", 00:26:03.143 "trtype": "tcp", 00:26:03.143 "traddr": "10.0.0.2", 00:26:03.143 "hostaddr": "10.0.0.2", 00:26:03.143 "hostsvcid": "60000", 00:26:03.143 "adrfam": "ipv4", 00:26:03.143 "trsvcid": "4420", 00:26:03.143 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:03.143 "method": "bdev_nvme_attach_controller", 00:26:03.143 "req_id": 1 00:26:03.143 } 00:26:03.143 Got JSON-RPC error response 00:26:03.143 response: 00:26:03.143 { 00:26:03.143 "code": -114, 00:26:03.143 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:03.143 } 00:26:03.143 22:24:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:03.143 22:24:58 -- common/autotest_common.sh@643 -- # es=1 00:26:03.143 22:24:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:03.143 22:24:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:03.143 22:24:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:03.143 22:24:58 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:03.143 22:24:58 -- common/autotest_common.sh@640 -- # local es=0 00:26:03.143 22:24:58 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:03.143 22:24:58 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:03.143 22:24:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:03.143 22:24:58 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:03.143 22:24:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:03.143 22:24:58 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:03.143 22:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.143 22:24:58 -- common/autotest_common.sh@10 -- # set +x 00:26:03.143 request: 00:26:03.143 { 00:26:03.143 "name": "NVMe0", 00:26:03.143 "trtype": "tcp", 00:26:03.143 "traddr": "10.0.0.2", 00:26:03.143 "hostaddr": 
"10.0.0.2", 00:26:03.143 "hostsvcid": "60000", 00:26:03.143 "adrfam": "ipv4", 00:26:03.143 "trsvcid": "4420", 00:26:03.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.143 "multipath": "disable", 00:26:03.143 "method": "bdev_nvme_attach_controller", 00:26:03.143 "req_id": 1 00:26:03.143 } 00:26:03.143 Got JSON-RPC error response 00:26:03.143 response: 00:26:03.143 { 00:26:03.143 "code": -114, 00:26:03.143 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:26:03.143 } 00:26:03.143 22:24:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:03.143 22:24:58 -- common/autotest_common.sh@643 -- # es=1 00:26:03.143 22:24:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:03.143 22:24:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:03.143 22:24:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:03.143 22:24:58 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:03.143 22:24:58 -- common/autotest_common.sh@640 -- # local es=0 00:26:03.143 22:24:58 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:03.143 22:24:58 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:26:03.143 22:24:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:03.143 22:24:58 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:26:03.143 22:24:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:03.143 22:24:58 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:03.143 22:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.143 22:24:58 -- common/autotest_common.sh@10 -- # set +x 00:26:03.143 request: 00:26:03.143 { 00:26:03.143 "name": "NVMe0", 00:26:03.143 "trtype": "tcp", 00:26:03.143 "traddr": "10.0.0.2", 00:26:03.143 "hostaddr": "10.0.0.2", 00:26:03.143 "hostsvcid": "60000", 00:26:03.143 "adrfam": "ipv4", 00:26:03.143 "trsvcid": "4420", 00:26:03.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:03.143 "multipath": "failover", 00:26:03.143 "method": "bdev_nvme_attach_controller", 00:26:03.143 "req_id": 1 00:26:03.143 } 00:26:03.143 Got JSON-RPC error response 00:26:03.143 response: 00:26:03.143 { 00:26:03.143 "code": -114, 00:26:03.143 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:26:03.143 } 00:26:03.143 22:24:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:26:03.143 22:24:58 -- common/autotest_common.sh@643 -- # es=1 00:26:03.143 22:24:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:03.143 22:24:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:26:03.143 22:24:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:03.143 22:24:58 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:03.143 22:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.143 22:24:58 -- common/autotest_common.sh@10 -- # set +x 00:26:03.401 00:26:03.401 22:24:58 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:26:03.401 22:24:58 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:03.401 22:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.401 22:24:58 -- common/autotest_common.sh@10 -- # set +x 00:26:03.401 22:24:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.401 22:24:58 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:03.401 22:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.401 22:24:58 -- common/autotest_common.sh@10 -- # set +x 00:26:03.401 00:26:03.401 22:24:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.401 22:24:58 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:03.401 22:24:58 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:03.401 22:24:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:03.401 22:24:58 -- common/autotest_common.sh@10 -- # set +x 00:26:03.401 22:24:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:03.401 22:24:58 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:03.402 22:24:58 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:04.775 0 00:26:04.775 22:24:59 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:04.775 22:24:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.775 22:24:59 -- common/autotest_common.sh@10 -- # set +x 00:26:04.775 22:24:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.775 22:24:59 -- host/multicontroller.sh@100 -- # killprocess 3667892 00:26:04.775 22:24:59 -- common/autotest_common.sh@926 -- # '[' -z 3667892 ']' 00:26:04.775 22:24:59 -- common/autotest_common.sh@930 -- # kill -0 3667892 00:26:04.775 22:24:59 -- common/autotest_common.sh@931 -- # uname 00:26:04.775 22:24:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:04.775 22:24:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3667892 00:26:04.775 22:24:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:04.775 22:24:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:04.775 22:24:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3667892' 00:26:04.775 killing process with pid 3667892 00:26:04.776 22:24:59 -- common/autotest_common.sh@945 -- # kill 3667892 00:26:04.776 22:24:59 -- common/autotest_common.sh@950 -- # wait 3667892 00:26:04.776 22:24:59 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:04.776 22:24:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.776 22:24:59 -- common/autotest_common.sh@10 -- # set +x 00:26:04.776 22:24:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.776 22:24:59 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:04.776 22:24:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.776 22:24:59 -- common/autotest_common.sh@10 -- # set +x 00:26:04.776 22:24:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.776 22:24:59 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
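The exchange above is the core of the multicontroller test: re-attaching the same 10.0.0.2:4420 path under the existing name NVMe0 is rejected with -114 both with multipath "disable" and with "failover", attaching a fresh path on port 4421 succeeds, and that path is then detached and re-attached as a separate controller NVMe1 before bdevperf traffic runs. A minimal sketch of the add/inspect/remove of that second path, driven directly with SPDK's scripts/rpc.py (which is what rpc_cmd wraps here); it assumes bdevperf is already up on /var/tmp/bdevperf.sock with NVMe0 attached to 10.0.0.2:4420, as it is at this point in the run.

    rpc_sock=/var/tmp/bdevperf.sock
    # add a second path to the existing NVMe0 controller (failover pair)
    ./scripts/rpc.py -s $rpc_sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # inspect the result (the test itself just greps this output for NVMe)
    ./scripts/rpc.py -s $rpc_sock bdev_nvme_get_controllers
    # drop the extra path again without touching the original 4420 path
    ./scripts/rpc.py -s $rpc_sock bdev_nvme_detach_controller NVMe0 \
            -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1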
00:26:04.776 22:24:59 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:04.776 22:24:59 -- common/autotest_common.sh@1597 -- # read -r file 00:26:04.776 22:24:59 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:26:04.776 22:24:59 -- common/autotest_common.sh@1596 -- # sort -u 00:26:04.776 22:24:59 -- common/autotest_common.sh@1598 -- # cat 00:26:04.776 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:04.776 [2024-07-24 22:24:57.105824] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:04.776 [2024-07-24 22:24:57.105874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667892 ] 00:26:04.776 EAL: No free 2048 kB hugepages reported on node 1 00:26:04.776 [2024-07-24 22:24:57.162935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.776 [2024-07-24 22:24:57.201333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.776 [2024-07-24 22:24:58.393757] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 8ef7daae-9793-491b-9c49-25163c15327f already exists 00:26:04.776 [2024-07-24 22:24:58.393787] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:8ef7daae-9793-491b-9c49-25163c15327f alias for bdev NVMe1n1 00:26:04.776 [2024-07-24 22:24:58.393797] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:04.776 Running I/O for 1 seconds... 00:26:04.776 00:26:04.776 Latency(us) 00:26:04.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.776 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:04.776 NVMe0n1 : 1.01 23120.19 90.31 0.00 0.00 5523.78 3063.10 28151.99 00:26:04.776 =================================================================================================================== 00:26:04.776 Total : 23120.19 90.31 0.00 0.00 5523.78 3063.10 28151.99 00:26:04.776 Received shutdown signal, test time was about 1.000000 seconds 00:26:04.776 00:26:04.776 Latency(us) 00:26:04.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.776 =================================================================================================================== 00:26:04.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:04.776 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:26:04.776 22:24:59 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:04.776 22:24:59 -- common/autotest_common.sh@1597 -- # read -r file 00:26:04.776 22:24:59 -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:04.776 22:24:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:04.776 22:24:59 -- nvmf/common.sh@116 -- # sync 00:26:04.776 22:24:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:04.776 22:24:59 -- nvmf/common.sh@119 -- # set +e 00:26:04.776 22:24:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:04.776 22:24:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:04.776 rmmod nvme_tcp 00:26:04.776 rmmod nvme_fabrics 00:26:04.776 rmmod nvme_keyring 00:26:04.776 22:24:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:04.776 22:24:59 -- nvmf/common.sh@123 -- # set 
-e 00:26:04.776 22:24:59 -- nvmf/common.sh@124 -- # return 0 00:26:04.776 22:24:59 -- nvmf/common.sh@477 -- # '[' -n 3667835 ']' 00:26:04.776 22:24:59 -- nvmf/common.sh@478 -- # killprocess 3667835 00:26:04.776 22:24:59 -- common/autotest_common.sh@926 -- # '[' -z 3667835 ']' 00:26:04.776 22:24:59 -- common/autotest_common.sh@930 -- # kill -0 3667835 00:26:04.776 22:24:59 -- common/autotest_common.sh@931 -- # uname 00:26:04.776 22:24:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:04.776 22:24:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3667835 00:26:04.776 22:24:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:04.776 22:24:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:04.776 22:24:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3667835' 00:26:04.776 killing process with pid 3667835 00:26:04.776 22:24:59 -- common/autotest_common.sh@945 -- # kill 3667835 00:26:04.776 22:24:59 -- common/autotest_common.sh@950 -- # wait 3667835 00:26:05.034 22:25:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:05.034 22:25:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:05.034 22:25:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:05.034 22:25:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:05.034 22:25:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:05.034 22:25:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.034 22:25:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:05.034 22:25:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.571 22:25:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:07.571 00:26:07.571 real 0m11.463s 00:26:07.571 user 0m16.092s 00:26:07.571 sys 0m4.646s 00:26:07.571 22:25:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:07.571 22:25:02 -- common/autotest_common.sh@10 -- # set +x 00:26:07.571 ************************************ 00:26:07.571 END TEST nvmf_multicontroller 00:26:07.571 ************************************ 00:26:07.571 22:25:02 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:07.571 22:25:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:07.571 22:25:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:07.571 22:25:02 -- common/autotest_common.sh@10 -- # set +x 00:26:07.571 ************************************ 00:26:07.571 START TEST nvmf_aer 00:26:07.571 ************************************ 00:26:07.571 22:25:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:07.571 * Looking for test storage... 
00:26:07.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:07.571 22:25:02 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:07.571 22:25:02 -- nvmf/common.sh@7 -- # uname -s 00:26:07.571 22:25:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.571 22:25:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.571 22:25:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.571 22:25:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.571 22:25:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.571 22:25:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.571 22:25:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.571 22:25:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.571 22:25:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.571 22:25:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.571 22:25:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:07.571 22:25:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:07.571 22:25:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.571 22:25:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.571 22:25:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.571 22:25:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.571 22:25:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.571 22:25:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.571 22:25:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.571 22:25:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.571 22:25:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.571 22:25:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.571 22:25:02 -- paths/export.sh@5 -- # export PATH 00:26:07.571 22:25:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.571 22:25:02 -- nvmf/common.sh@46 -- # : 0 00:26:07.571 22:25:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:07.571 22:25:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:07.571 22:25:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:07.571 22:25:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.571 22:25:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.571 22:25:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:07.571 22:25:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:07.571 22:25:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:07.571 22:25:02 -- host/aer.sh@11 -- # nvmftestinit 00:26:07.571 22:25:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:07.571 22:25:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.571 22:25:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:07.571 22:25:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:07.571 22:25:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:07.571 22:25:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.571 22:25:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:07.571 22:25:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.571 22:25:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:07.571 22:25:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:07.571 22:25:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:07.571 22:25:02 -- common/autotest_common.sh@10 -- # set +x 00:26:12.844 22:25:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:12.844 22:25:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:12.844 22:25:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:12.844 22:25:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:12.844 22:25:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:12.844 22:25:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:12.844 22:25:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:12.844 22:25:07 -- nvmf/common.sh@294 -- # net_devs=() 00:26:12.844 22:25:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:12.844 22:25:07 -- nvmf/common.sh@295 -- # e810=() 00:26:12.844 22:25:07 -- nvmf/common.sh@295 -- # local -ga e810 00:26:12.844 22:25:07 -- nvmf/common.sh@296 -- # x722=() 00:26:12.844 
22:25:07 -- nvmf/common.sh@296 -- # local -ga x722 00:26:12.844 22:25:07 -- nvmf/common.sh@297 -- # mlx=() 00:26:12.844 22:25:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:12.844 22:25:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.844 22:25:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.844 22:25:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.844 22:25:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.844 22:25:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.844 22:25:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.844 22:25:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.844 22:25:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.844 22:25:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.844 22:25:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.844 22:25:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.844 22:25:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:12.844 22:25:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:12.844 22:25:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:12.844 22:25:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:12.844 22:25:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:12.844 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:12.844 22:25:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:12.844 22:25:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:12.844 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:12.844 22:25:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:12.844 22:25:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:12.844 22:25:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.844 22:25:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:12.844 22:25:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.844 22:25:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:12.844 Found net devices under 0000:86:00.0: cvl_0_0 00:26:12.844 22:25:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.844 22:25:07 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:12.844 22:25:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.844 22:25:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:12.844 22:25:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.844 22:25:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:12.844 Found net devices under 0000:86:00.1: cvl_0_1 00:26:12.844 22:25:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.844 22:25:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:12.844 22:25:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:12.844 22:25:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:12.844 22:25:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:12.844 22:25:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.844 22:25:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.844 22:25:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.844 22:25:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:12.845 22:25:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.845 22:25:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.845 22:25:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:12.845 22:25:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.845 22:25:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.845 22:25:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:12.845 22:25:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:12.845 22:25:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.845 22:25:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.845 22:25:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.845 22:25:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.845 22:25:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:12.845 22:25:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.845 22:25:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.845 22:25:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.845 22:25:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:12.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:26:12.845 00:26:12.845 --- 10.0.0.2 ping statistics --- 00:26:12.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.845 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:26:12.845 22:25:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:12.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:26:12.845 00:26:12.845 --- 10.0.0.1 ping statistics --- 00:26:12.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.845 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:26:12.845 22:25:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.845 22:25:07 -- nvmf/common.sh@410 -- # return 0 00:26:12.845 22:25:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:12.845 22:25:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.845 22:25:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:12.845 22:25:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:12.845 22:25:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.845 22:25:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:12.845 22:25:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:12.845 22:25:07 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:12.845 22:25:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:12.845 22:25:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:12.845 22:25:07 -- common/autotest_common.sh@10 -- # set +x 00:26:12.845 22:25:07 -- nvmf/common.sh@469 -- # nvmfpid=3671908 00:26:12.845 22:25:07 -- nvmf/common.sh@470 -- # waitforlisten 3671908 00:26:12.845 22:25:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:12.845 22:25:07 -- common/autotest_common.sh@819 -- # '[' -z 3671908 ']' 00:26:12.845 22:25:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.845 22:25:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:12.845 22:25:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.845 22:25:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:12.845 22:25:07 -- common/autotest_common.sh@10 -- # set +x 00:26:12.845 [2024-07-24 22:25:07.774191] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:12.845 [2024-07-24 22:25:07.774236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.845 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.845 [2024-07-24 22:25:07.831842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:12.845 [2024-07-24 22:25:07.870366] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:12.845 [2024-07-24 22:25:07.870481] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.845 [2024-07-24 22:25:07.870489] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.845 [2024-07-24 22:25:07.870496] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:12.845 [2024-07-24 22:25:07.870583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.845 [2024-07-24 22:25:07.870671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:12.845 [2024-07-24 22:25:07.870758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:12.845 [2024-07-24 22:25:07.870759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.785 22:25:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:13.785 22:25:08 -- common/autotest_common.sh@852 -- # return 0 00:26:13.785 22:25:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:13.785 22:25:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:13.785 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:26:13.785 22:25:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.785 22:25:08 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:13.785 22:25:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:13.785 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:26:13.785 [2024-07-24 22:25:08.613392] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.785 22:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:13.785 22:25:08 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:13.785 22:25:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:13.785 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:26:13.785 Malloc0 00:26:13.785 22:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:13.785 22:25:08 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:13.785 22:25:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:13.785 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:26:13.785 22:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:13.785 22:25:08 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:13.785 22:25:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:13.785 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:26:13.785 22:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:13.785 22:25:08 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.785 22:25:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:13.785 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:26:13.785 [2024-07-24 22:25:08.665214] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.785 22:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:13.785 22:25:08 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:13.785 22:25:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:13.785 22:25:08 -- common/autotest_common.sh@10 -- # set +x 00:26:13.785 [2024-07-24 22:25:08.673002] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:13.785 [ 00:26:13.785 { 00:26:13.785 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:13.785 "subtype": "Discovery", 00:26:13.785 "listen_addresses": [], 00:26:13.785 "allow_any_host": true, 00:26:13.785 "hosts": [] 00:26:13.785 }, 00:26:13.785 { 00:26:13.785 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:26:13.785 "subtype": "NVMe", 00:26:13.785 "listen_addresses": [ 00:26:13.785 { 00:26:13.785 "transport": "TCP", 00:26:13.785 "trtype": "TCP", 00:26:13.785 "adrfam": "IPv4", 00:26:13.785 "traddr": "10.0.0.2", 00:26:13.785 "trsvcid": "4420" 00:26:13.785 } 00:26:13.785 ], 00:26:13.785 "allow_any_host": true, 00:26:13.785 "hosts": [], 00:26:13.785 "serial_number": "SPDK00000000000001", 00:26:13.785 "model_number": "SPDK bdev Controller", 00:26:13.785 "max_namespaces": 2, 00:26:13.785 "min_cntlid": 1, 00:26:13.785 "max_cntlid": 65519, 00:26:13.785 "namespaces": [ 00:26:13.785 { 00:26:13.785 "nsid": 1, 00:26:13.785 "bdev_name": "Malloc0", 00:26:13.785 "name": "Malloc0", 00:26:13.785 "nguid": "038BE811855842ECBF250285ACA9A9BD", 00:26:13.785 "uuid": "038be811-8558-42ec-bf25-0285aca9a9bd" 00:26:13.785 } 00:26:13.785 ] 00:26:13.785 } 00:26:13.785 ] 00:26:13.785 22:25:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:13.785 22:25:08 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:13.785 22:25:08 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:13.785 22:25:08 -- host/aer.sh@33 -- # aerpid=3672031 00:26:13.785 22:25:08 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:13.785 22:25:08 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:13.785 22:25:08 -- common/autotest_common.sh@1244 -- # local i=0 00:26:13.785 22:25:08 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:13.785 22:25:08 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:26:13.785 22:25:08 -- common/autotest_common.sh@1247 -- # i=1 00:26:13.785 22:25:08 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:13.785 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.785 22:25:08 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:13.785 22:25:08 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:26:13.785 22:25:08 -- common/autotest_common.sh@1247 -- # i=2 00:26:13.785 22:25:08 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:13.785 22:25:08 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:13.785 22:25:08 -- common/autotest_common.sh@1246 -- # '[' 2 -lt 200 ']' 00:26:13.785 22:25:08 -- common/autotest_common.sh@1247 -- # i=3 00:26:13.785 22:25:08 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:26:14.045 22:25:09 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:14.045 22:25:09 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:14.045 22:25:09 -- common/autotest_common.sh@1255 -- # return 0 00:26:14.045 22:25:09 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:14.045 22:25:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.045 22:25:09 -- common/autotest_common.sh@10 -- # set +x 00:26:14.045 Malloc1 00:26:14.045 22:25:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.045 22:25:09 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:14.045 22:25:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.045 22:25:09 -- common/autotest_common.sh@10 -- # set +x 00:26:14.045 22:25:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.045 22:25:09 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:14.045 22:25:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.045 22:25:09 -- common/autotest_common.sh@10 -- # set +x 00:26:14.045 [ 00:26:14.045 { 00:26:14.045 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:14.045 "subtype": "Discovery", 00:26:14.045 "listen_addresses": [], 00:26:14.045 "allow_any_host": true, 00:26:14.045 "hosts": [] 00:26:14.046 }, 00:26:14.046 { 00:26:14.046 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.046 "subtype": "NVMe", 00:26:14.046 "listen_addresses": [ 00:26:14.046 { 00:26:14.046 "transport": "TCP", 00:26:14.046 "trtype": "TCP", 00:26:14.046 "adrfam": "IPv4", 00:26:14.046 "traddr": "10.0.0.2", 00:26:14.046 "trsvcid": "4420" 00:26:14.046 } 00:26:14.046 ], 00:26:14.046 "allow_any_host": true, 00:26:14.046 "hosts": [], 00:26:14.046 "serial_number": "SPDK00000000000001", 00:26:14.046 "model_number": "SPDK bdev Controller", 00:26:14.046 "max_namespaces": 2, 00:26:14.046 "min_cntlid": 1, 00:26:14.046 "max_cntlid": 65519, 00:26:14.046 "namespaces": [ 00:26:14.046 { 00:26:14.046 "nsid": 1, 00:26:14.046 "bdev_name": "Malloc0", 00:26:14.046 "name": "Malloc0", 00:26:14.046 "nguid": "038BE811855842ECBF250285ACA9A9BD", 00:26:14.046 "uuid": "038be811-8558-42ec-bf25-0285aca9a9bd" 00:26:14.046 }, 00:26:14.046 { 00:26:14.046 "nsid": 2, 00:26:14.046 "bdev_name": "Malloc1", 00:26:14.046 Asynchronous Event Request test 00:26:14.046 Attaching to 10.0.0.2 00:26:14.046 Attached to 10.0.0.2 00:26:14.046 Registering asynchronous event callbacks... 00:26:14.046 Starting namespace attribute notice tests for all controllers... 00:26:14.046 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:14.046 aer_cb - Changed Namespace 00:26:14.046 Cleaning up... 
00:26:14.046 "name": "Malloc1", 00:26:14.046 "nguid": "694CDBC4F0C24C16BB574F688A3C2835", 00:26:14.046 "uuid": "694cdbc4-f0c2-4c16-bb57-4f688a3c2835" 00:26:14.046 } 00:26:14.046 ] 00:26:14.046 } 00:26:14.046 ] 00:26:14.046 22:25:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.046 22:25:09 -- host/aer.sh@43 -- # wait 3672031 00:26:14.046 22:25:09 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:14.046 22:25:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.046 22:25:09 -- common/autotest_common.sh@10 -- # set +x 00:26:14.046 22:25:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.046 22:25:09 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:14.046 22:25:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.046 22:25:09 -- common/autotest_common.sh@10 -- # set +x 00:26:14.046 22:25:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.046 22:25:09 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:14.046 22:25:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.046 22:25:09 -- common/autotest_common.sh@10 -- # set +x 00:26:14.046 22:25:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.046 22:25:09 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:14.046 22:25:09 -- host/aer.sh@51 -- # nvmftestfini 00:26:14.046 22:25:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:14.046 22:25:09 -- nvmf/common.sh@116 -- # sync 00:26:14.046 22:25:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:14.046 22:25:09 -- nvmf/common.sh@119 -- # set +e 00:26:14.046 22:25:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:14.046 22:25:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:14.046 rmmod nvme_tcp 00:26:14.046 rmmod nvme_fabrics 00:26:14.046 rmmod nvme_keyring 00:26:14.306 22:25:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:14.306 22:25:09 -- nvmf/common.sh@123 -- # set -e 00:26:14.306 22:25:09 -- nvmf/common.sh@124 -- # return 0 00:26:14.306 22:25:09 -- nvmf/common.sh@477 -- # '[' -n 3671908 ']' 00:26:14.306 22:25:09 -- nvmf/common.sh@478 -- # killprocess 3671908 00:26:14.306 22:25:09 -- common/autotest_common.sh@926 -- # '[' -z 3671908 ']' 00:26:14.306 22:25:09 -- common/autotest_common.sh@930 -- # kill -0 3671908 00:26:14.306 22:25:09 -- common/autotest_common.sh@931 -- # uname 00:26:14.306 22:25:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:14.306 22:25:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3671908 00:26:14.306 22:25:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:14.306 22:25:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:14.306 22:25:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3671908' 00:26:14.306 killing process with pid 3671908 00:26:14.306 22:25:09 -- common/autotest_common.sh@945 -- # kill 3671908 00:26:14.306 [2024-07-24 22:25:09.235324] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:14.306 22:25:09 -- common/autotest_common.sh@950 -- # wait 3671908 00:26:14.306 22:25:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:14.306 22:25:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:14.306 22:25:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:14.306 22:25:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:26:14.306 22:25:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:14.306 22:25:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.306 22:25:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:14.306 22:25:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.843 22:25:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:16.843 00:26:16.843 real 0m9.246s 00:26:16.843 user 0m7.663s 00:26:16.843 sys 0m4.460s 00:26:16.843 22:25:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:16.843 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:26:16.843 ************************************ 00:26:16.843 END TEST nvmf_aer 00:26:16.843 ************************************ 00:26:16.843 22:25:11 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:16.843 22:25:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:16.843 22:25:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:16.843 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:26:16.843 ************************************ 00:26:16.843 START TEST nvmf_async_init 00:26:16.843 ************************************ 00:26:16.843 22:25:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:16.843 * Looking for test storage... 00:26:16.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:16.843 22:25:11 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:16.843 22:25:11 -- nvmf/common.sh@7 -- # uname -s 00:26:16.843 22:25:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:16.843 22:25:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:16.843 22:25:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:16.843 22:25:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:16.843 22:25:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:16.843 22:25:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:16.843 22:25:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:16.843 22:25:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:16.843 22:25:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:16.843 22:25:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:16.843 22:25:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:16.843 22:25:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:16.843 22:25:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:16.843 22:25:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:16.843 22:25:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:16.843 22:25:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:16.843 22:25:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:16.843 22:25:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:16.843 22:25:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:16.843 22:25:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.843 22:25:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.843 22:25:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.843 22:25:11 -- paths/export.sh@5 -- # export PATH 00:26:16.843 22:25:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:16.843 22:25:11 -- nvmf/common.sh@46 -- # : 0 00:26:16.843 22:25:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:16.843 22:25:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:16.843 22:25:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:16.843 22:25:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:16.843 22:25:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:16.843 22:25:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:16.843 22:25:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:16.843 22:25:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:16.843 22:25:11 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:16.843 22:25:11 -- host/async_init.sh@14 -- # null_block_size=512 00:26:16.843 22:25:11 -- host/async_init.sh@15 -- # null_bdev=null0 00:26:16.843 22:25:11 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:16.843 22:25:11 -- host/async_init.sh@20 -- # uuidgen 00:26:16.843 22:25:11 -- host/async_init.sh@20 -- # tr -d - 00:26:16.843 22:25:11 -- host/async_init.sh@20 -- # nguid=d868e47920b7493abc44cff0f79ea570 00:26:16.843 22:25:11 -- host/async_init.sh@22 -- # nvmftestinit 00:26:16.843 22:25:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 
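async_init.sh has just set its knobs here: a 1024 MiB null bdev (null0) with 512-byte blocks, nvme0 as the name for the controller it will attach, and a namespace GUID produced by uuidgen with the dashes stripped (d868e479... is simply what this run generated). A condensed sketch of the target-side setup those values feed into, written directly against scripts/rpc.py; the same RPCs appear verbatim further down in this log.

    nguid=$(uuidgen | tr -d -)          # e.g. d868e47920b7493abc44cff0f79ea570
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py bdev_null_create null0 1024 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g $nguid
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
            -t tcp -a 10.0.0.2 -s 4420
    # attach from the same app so the namespace shows up as bdev nvme0n1
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
            -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0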
00:26:16.843 22:25:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:16.843 22:25:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:16.843 22:25:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:16.843 22:25:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:16.843 22:25:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.843 22:25:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:16.843 22:25:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:16.843 22:25:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:16.843 22:25:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:16.843 22:25:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:16.843 22:25:11 -- common/autotest_common.sh@10 -- # set +x 00:26:22.114 22:25:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:22.114 22:25:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:22.114 22:25:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:22.114 22:25:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:22.114 22:25:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:22.114 22:25:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:22.114 22:25:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:22.114 22:25:16 -- nvmf/common.sh@294 -- # net_devs=() 00:26:22.114 22:25:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:22.114 22:25:16 -- nvmf/common.sh@295 -- # e810=() 00:26:22.114 22:25:16 -- nvmf/common.sh@295 -- # local -ga e810 00:26:22.114 22:25:16 -- nvmf/common.sh@296 -- # x722=() 00:26:22.114 22:25:16 -- nvmf/common.sh@296 -- # local -ga x722 00:26:22.114 22:25:16 -- nvmf/common.sh@297 -- # mlx=() 00:26:22.114 22:25:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:22.114 22:25:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:22.114 22:25:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:22.114 22:25:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:22.114 22:25:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:22.114 22:25:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:22.114 22:25:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:22.114 22:25:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:22.114 22:25:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:22.114 22:25:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:22.114 22:25:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:22.114 22:25:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:22.114 22:25:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:22.114 22:25:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:22.114 22:25:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:22.114 22:25:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:22.115 22:25:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:22.115 22:25:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:22.115 22:25:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:22.115 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:22.115 22:25:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:22.115 22:25:16 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:22.115 22:25:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:22.115 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:22.115 22:25:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:22.115 22:25:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:22.115 22:25:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.115 22:25:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:22.115 22:25:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.115 22:25:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:22.115 Found net devices under 0000:86:00.0: cvl_0_0 00:26:22.115 22:25:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.115 22:25:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:22.115 22:25:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:22.115 22:25:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:22.115 22:25:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:22.115 22:25:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:22.115 Found net devices under 0000:86:00.1: cvl_0_1 00:26:22.115 22:25:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:22.115 22:25:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:22.115 22:25:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:22.115 22:25:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:22.115 22:25:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:22.115 22:25:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:22.115 22:25:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:22.115 22:25:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:22.115 22:25:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:22.115 22:25:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:22.115 22:25:16 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:22.115 22:25:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:22.115 22:25:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:22.115 22:25:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:22.115 22:25:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:22.115 22:25:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:22.115 22:25:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
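nvmf_tcp_init is rebuilding the usual two-port topology for these phy tests: the target-facing port (cvl_0_0 on this machine) is moved into its own network namespace and gets 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side. A rough sketch of what that split is for once the addressing below is in place; the nvmf_tgt path is shortened here relative to the spdk checkout used in this workspace.

    # the target runs entirely inside the namespace, so its TCP listeners bind there
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    # connectivity check in both directions between the two ports
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns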
00:26:22.115 22:25:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:22.115 22:25:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:22.115 22:25:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:22.115 22:25:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:22.115 22:25:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:22.115 22:25:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:22.115 22:25:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:22.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:22.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:26:22.115 00:26:22.115 --- 10.0.0.2 ping statistics --- 00:26:22.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.115 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:26:22.115 22:25:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:22.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:22.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:26:22.115 00:26:22.115 --- 10.0.0.1 ping statistics --- 00:26:22.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:22.115 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:26:22.115 22:25:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:22.115 22:25:16 -- nvmf/common.sh@410 -- # return 0 00:26:22.115 22:25:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:22.115 22:25:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:22.115 22:25:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:22.115 22:25:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:22.115 22:25:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:22.115 22:25:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:22.115 22:25:16 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:22.115 22:25:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:22.115 22:25:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:22.115 22:25:16 -- common/autotest_common.sh@10 -- # set +x 00:26:22.115 22:25:16 -- nvmf/common.sh@469 -- # nvmfpid=3675483 00:26:22.115 22:25:16 -- nvmf/common.sh@470 -- # waitforlisten 3675483 00:26:22.115 22:25:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:22.115 22:25:16 -- common/autotest_common.sh@819 -- # '[' -z 3675483 ']' 00:26:22.115 22:25:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.115 22:25:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:22.115 22:25:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.115 22:25:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:22.115 22:25:16 -- common/autotest_common.sh@10 -- # set +x 00:26:22.115 [2024-07-24 22:25:16.962333] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:26:22.115 [2024-07-24 22:25:16.962374] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.115 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.115 [2024-07-24 22:25:17.018523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.115 [2024-07-24 22:25:17.057240] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:22.115 [2024-07-24 22:25:17.057347] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.115 [2024-07-24 22:25:17.057355] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.115 [2024-07-24 22:25:17.057361] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:22.115 [2024-07-24 22:25:17.057378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.682 22:25:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:22.682 22:25:17 -- common/autotest_common.sh@852 -- # return 0 00:26:22.682 22:25:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:22.682 22:25:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:22.682 22:25:17 -- common/autotest_common.sh@10 -- # set +x 00:26:22.682 22:25:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.682 22:25:17 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:22.682 22:25:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.682 22:25:17 -- common/autotest_common.sh@10 -- # set +x 00:26:22.682 [2024-07-24 22:25:17.808172] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.682 22:25:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.682 22:25:17 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:22.682 22:25:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.682 22:25:17 -- common/autotest_common.sh@10 -- # set +x 00:26:22.940 null0 00:26:22.940 22:25:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.940 22:25:17 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:22.940 22:25:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.940 22:25:17 -- common/autotest_common.sh@10 -- # set +x 00:26:22.940 22:25:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.940 22:25:17 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:22.940 22:25:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.940 22:25:17 -- common/autotest_common.sh@10 -- # set +x 00:26:22.940 22:25:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.940 22:25:17 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d868e47920b7493abc44cff0f79ea570 00:26:22.940 22:25:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.940 22:25:17 -- common/autotest_common.sh@10 -- # set +x 00:26:22.940 22:25:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.940 22:25:17 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.940 22:25:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.940 22:25:17 -- 
common/autotest_common.sh@10 -- # set +x 00:26:22.940 [2024-07-24 22:25:17.848366] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.940 22:25:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:22.940 22:25:17 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:22.940 22:25:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:22.940 22:25:17 -- common/autotest_common.sh@10 -- # set +x 00:26:23.198 nvme0n1 00:26:23.198 22:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.198 22:25:18 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:23.198 22:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.198 22:25:18 -- common/autotest_common.sh@10 -- # set +x 00:26:23.198 [ 00:26:23.198 { 00:26:23.198 "name": "nvme0n1", 00:26:23.198 "aliases": [ 00:26:23.198 "d868e479-20b7-493a-bc44-cff0f79ea570" 00:26:23.198 ], 00:26:23.198 "product_name": "NVMe disk", 00:26:23.198 "block_size": 512, 00:26:23.198 "num_blocks": 2097152, 00:26:23.198 "uuid": "d868e479-20b7-493a-bc44-cff0f79ea570", 00:26:23.198 "assigned_rate_limits": { 00:26:23.198 "rw_ios_per_sec": 0, 00:26:23.198 "rw_mbytes_per_sec": 0, 00:26:23.198 "r_mbytes_per_sec": 0, 00:26:23.198 "w_mbytes_per_sec": 0 00:26:23.198 }, 00:26:23.198 "claimed": false, 00:26:23.198 "zoned": false, 00:26:23.198 "supported_io_types": { 00:26:23.198 "read": true, 00:26:23.198 "write": true, 00:26:23.198 "unmap": false, 00:26:23.198 "write_zeroes": true, 00:26:23.198 "flush": true, 00:26:23.198 "reset": true, 00:26:23.198 "compare": true, 00:26:23.198 "compare_and_write": true, 00:26:23.198 "abort": true, 00:26:23.198 "nvme_admin": true, 00:26:23.198 "nvme_io": true 00:26:23.198 }, 00:26:23.198 "driver_specific": { 00:26:23.198 "nvme": [ 00:26:23.198 { 00:26:23.198 "trid": { 00:26:23.198 "trtype": "TCP", 00:26:23.198 "adrfam": "IPv4", 00:26:23.198 "traddr": "10.0.0.2", 00:26:23.198 "trsvcid": "4420", 00:26:23.198 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:23.198 }, 00:26:23.198 "ctrlr_data": { 00:26:23.198 "cntlid": 1, 00:26:23.198 "vendor_id": "0x8086", 00:26:23.198 "model_number": "SPDK bdev Controller", 00:26:23.198 "serial_number": "00000000000000000000", 00:26:23.198 "firmware_revision": "24.01.1", 00:26:23.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.198 "oacs": { 00:26:23.198 "security": 0, 00:26:23.198 "format": 0, 00:26:23.198 "firmware": 0, 00:26:23.198 "ns_manage": 0 00:26:23.198 }, 00:26:23.198 "multi_ctrlr": true, 00:26:23.198 "ana_reporting": false 00:26:23.198 }, 00:26:23.198 "vs": { 00:26:23.198 "nvme_version": "1.3" 00:26:23.198 }, 00:26:23.198 "ns_data": { 00:26:23.198 "id": 1, 00:26:23.198 "can_share": true 00:26:23.198 } 00:26:23.198 } 00:26:23.198 ], 00:26:23.198 "mp_policy": "active_passive" 00:26:23.198 } 00:26:23.198 } 00:26:23.198 ] 00:26:23.198 22:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.198 22:25:18 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:23.198 22:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.198 22:25:18 -- common/autotest_common.sh@10 -- # set +x 00:26:23.198 [2024-07-24 22:25:18.100937] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:23.198 [2024-07-24 22:25:18.100991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14b8c60 (9): Bad file 
descriptor 00:26:23.198 [2024-07-24 22:25:18.233121] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:23.198 22:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.198 22:25:18 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:23.198 22:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.198 22:25:18 -- common/autotest_common.sh@10 -- # set +x 00:26:23.198 [ 00:26:23.198 { 00:26:23.198 "name": "nvme0n1", 00:26:23.198 "aliases": [ 00:26:23.198 "d868e479-20b7-493a-bc44-cff0f79ea570" 00:26:23.198 ], 00:26:23.198 "product_name": "NVMe disk", 00:26:23.198 "block_size": 512, 00:26:23.198 "num_blocks": 2097152, 00:26:23.198 "uuid": "d868e479-20b7-493a-bc44-cff0f79ea570", 00:26:23.198 "assigned_rate_limits": { 00:26:23.198 "rw_ios_per_sec": 0, 00:26:23.198 "rw_mbytes_per_sec": 0, 00:26:23.198 "r_mbytes_per_sec": 0, 00:26:23.198 "w_mbytes_per_sec": 0 00:26:23.198 }, 00:26:23.198 "claimed": false, 00:26:23.198 "zoned": false, 00:26:23.198 "supported_io_types": { 00:26:23.198 "read": true, 00:26:23.198 "write": true, 00:26:23.198 "unmap": false, 00:26:23.198 "write_zeroes": true, 00:26:23.198 "flush": true, 00:26:23.198 "reset": true, 00:26:23.198 "compare": true, 00:26:23.198 "compare_and_write": true, 00:26:23.198 "abort": true, 00:26:23.198 "nvme_admin": true, 00:26:23.198 "nvme_io": true 00:26:23.198 }, 00:26:23.198 "driver_specific": { 00:26:23.198 "nvme": [ 00:26:23.198 { 00:26:23.198 "trid": { 00:26:23.198 "trtype": "TCP", 00:26:23.198 "adrfam": "IPv4", 00:26:23.198 "traddr": "10.0.0.2", 00:26:23.198 "trsvcid": "4420", 00:26:23.198 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:23.198 }, 00:26:23.198 "ctrlr_data": { 00:26:23.198 "cntlid": 2, 00:26:23.198 "vendor_id": "0x8086", 00:26:23.198 "model_number": "SPDK bdev Controller", 00:26:23.198 "serial_number": "00000000000000000000", 00:26:23.198 "firmware_revision": "24.01.1", 00:26:23.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.198 "oacs": { 00:26:23.198 "security": 0, 00:26:23.198 "format": 0, 00:26:23.198 "firmware": 0, 00:26:23.198 "ns_manage": 0 00:26:23.198 }, 00:26:23.198 "multi_ctrlr": true, 00:26:23.198 "ana_reporting": false 00:26:23.198 }, 00:26:23.198 "vs": { 00:26:23.198 "nvme_version": "1.3" 00:26:23.198 }, 00:26:23.198 "ns_data": { 00:26:23.198 "id": 1, 00:26:23.198 "can_share": true 00:26:23.198 } 00:26:23.198 } 00:26:23.198 ], 00:26:23.198 "mp_policy": "active_passive" 00:26:23.198 } 00:26:23.198 } 00:26:23.198 ] 00:26:23.198 22:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.198 22:25:18 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.198 22:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.198 22:25:18 -- common/autotest_common.sh@10 -- # set +x 00:26:23.198 22:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.198 22:25:18 -- host/async_init.sh@53 -- # mktemp 00:26:23.198 22:25:18 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.C7WTsAjq0m 00:26:23.199 22:25:18 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:23.199 22:25:18 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.C7WTsAjq0m 00:26:23.199 22:25:18 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:23.199 22:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.199 22:25:18 -- common/autotest_common.sh@10 -- # set +x 00:26:23.199 22:25:18 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.199 22:25:18 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:23.199 22:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.199 22:25:18 -- common/autotest_common.sh@10 -- # set +x 00:26:23.199 [2024-07-24 22:25:18.289519] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:23.199 [2024-07-24 22:25:18.289615] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:23.199 22:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.199 22:25:18 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C7WTsAjq0m 00:26:23.199 22:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.199 22:25:18 -- common/autotest_common.sh@10 -- # set +x 00:26:23.199 22:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.199 22:25:18 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.C7WTsAjq0m 00:26:23.199 22:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.199 22:25:18 -- common/autotest_common.sh@10 -- # set +x 00:26:23.199 [2024-07-24 22:25:18.305553] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:23.488 nvme0n1 00:26:23.488 22:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.488 22:25:18 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:23.488 22:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.488 22:25:18 -- common/autotest_common.sh@10 -- # set +x 00:26:23.488 [ 00:26:23.488 { 00:26:23.488 "name": "nvme0n1", 00:26:23.488 "aliases": [ 00:26:23.488 "d868e479-20b7-493a-bc44-cff0f79ea570" 00:26:23.488 ], 00:26:23.489 "product_name": "NVMe disk", 00:26:23.489 "block_size": 512, 00:26:23.489 "num_blocks": 2097152, 00:26:23.489 "uuid": "d868e479-20b7-493a-bc44-cff0f79ea570", 00:26:23.489 "assigned_rate_limits": { 00:26:23.489 "rw_ios_per_sec": 0, 00:26:23.489 "rw_mbytes_per_sec": 0, 00:26:23.489 "r_mbytes_per_sec": 0, 00:26:23.489 "w_mbytes_per_sec": 0 00:26:23.489 }, 00:26:23.489 "claimed": false, 00:26:23.489 "zoned": false, 00:26:23.489 "supported_io_types": { 00:26:23.489 "read": true, 00:26:23.489 "write": true, 00:26:23.489 "unmap": false, 00:26:23.489 "write_zeroes": true, 00:26:23.489 "flush": true, 00:26:23.489 "reset": true, 00:26:23.489 "compare": true, 00:26:23.489 "compare_and_write": true, 00:26:23.489 "abort": true, 00:26:23.489 "nvme_admin": true, 00:26:23.489 "nvme_io": true 00:26:23.489 }, 00:26:23.489 "driver_specific": { 00:26:23.489 "nvme": [ 00:26:23.489 { 00:26:23.489 "trid": { 00:26:23.489 "trtype": "TCP", 00:26:23.489 "adrfam": "IPv4", 00:26:23.489 "traddr": "10.0.0.2", 00:26:23.489 "trsvcid": "4421", 00:26:23.489 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:26:23.489 }, 00:26:23.489 "ctrlr_data": { 00:26:23.489 "cntlid": 3, 00:26:23.489 "vendor_id": "0x8086", 00:26:23.489 "model_number": "SPDK bdev Controller", 00:26:23.489 "serial_number": "00000000000000000000", 00:26:23.489 "firmware_revision": "24.01.1", 00:26:23.489 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.489 "oacs": { 00:26:23.489 "security": 0, 00:26:23.489 "format": 0, 00:26:23.489 "firmware": 0, 00:26:23.489 
"ns_manage": 0 00:26:23.489 }, 00:26:23.489 "multi_ctrlr": true, 00:26:23.489 "ana_reporting": false 00:26:23.489 }, 00:26:23.489 "vs": { 00:26:23.489 "nvme_version": "1.3" 00:26:23.489 }, 00:26:23.489 "ns_data": { 00:26:23.489 "id": 1, 00:26:23.489 "can_share": true 00:26:23.489 } 00:26:23.489 } 00:26:23.489 ], 00:26:23.489 "mp_policy": "active_passive" 00:26:23.489 } 00:26:23.489 } 00:26:23.489 ] 00:26:23.489 22:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.489 22:25:18 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.489 22:25:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.489 22:25:18 -- common/autotest_common.sh@10 -- # set +x 00:26:23.489 22:25:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.489 22:25:18 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.C7WTsAjq0m 00:26:23.489 22:25:18 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:23.489 22:25:18 -- host/async_init.sh@78 -- # nvmftestfini 00:26:23.489 22:25:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:23.489 22:25:18 -- nvmf/common.sh@116 -- # sync 00:26:23.489 22:25:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:23.489 22:25:18 -- nvmf/common.sh@119 -- # set +e 00:26:23.489 22:25:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:23.489 22:25:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:23.489 rmmod nvme_tcp 00:26:23.489 rmmod nvme_fabrics 00:26:23.489 rmmod nvme_keyring 00:26:23.489 22:25:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:23.489 22:25:18 -- nvmf/common.sh@123 -- # set -e 00:26:23.489 22:25:18 -- nvmf/common.sh@124 -- # return 0 00:26:23.489 22:25:18 -- nvmf/common.sh@477 -- # '[' -n 3675483 ']' 00:26:23.489 22:25:18 -- nvmf/common.sh@478 -- # killprocess 3675483 00:26:23.489 22:25:18 -- common/autotest_common.sh@926 -- # '[' -z 3675483 ']' 00:26:23.489 22:25:18 -- common/autotest_common.sh@930 -- # kill -0 3675483 00:26:23.489 22:25:18 -- common/autotest_common.sh@931 -- # uname 00:26:23.489 22:25:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:23.489 22:25:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3675483 00:26:23.489 22:25:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:23.489 22:25:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:23.489 22:25:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3675483' 00:26:23.489 killing process with pid 3675483 00:26:23.489 22:25:18 -- common/autotest_common.sh@945 -- # kill 3675483 00:26:23.489 22:25:18 -- common/autotest_common.sh@950 -- # wait 3675483 00:26:23.747 22:25:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:23.747 22:25:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:23.747 22:25:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:23.747 22:25:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.747 22:25:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:23.747 22:25:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.747 22:25:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.747 22:25:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.649 22:25:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:25.649 00:26:25.649 real 0m9.208s 00:26:25.649 user 0m3.357s 00:26:25.649 sys 0m4.361s 00:26:25.649 22:25:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.649 22:25:20 -- 
common/autotest_common.sh@10 -- # set +x 00:26:25.649 ************************************ 00:26:25.649 END TEST nvmf_async_init 00:26:25.649 ************************************ 00:26:25.649 22:25:20 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:25.649 22:25:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:25.649 22:25:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:25.649 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:25.649 ************************************ 00:26:25.649 START TEST dma 00:26:25.649 ************************************ 00:26:25.649 22:25:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:25.910 * Looking for test storage... 00:26:25.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.910 22:25:20 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.910 22:25:20 -- nvmf/common.sh@7 -- # uname -s 00:26:25.910 22:25:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.910 22:25:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.910 22:25:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.910 22:25:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.910 22:25:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.910 22:25:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.910 22:25:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.910 22:25:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.910 22:25:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.910 22:25:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.910 22:25:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:25.910 22:25:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:25.910 22:25:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.910 22:25:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.910 22:25:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.910 22:25:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.910 22:25:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.910 22:25:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.910 22:25:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.910 22:25:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.910 22:25:20 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.910 22:25:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.910 22:25:20 -- paths/export.sh@5 -- # export PATH 00:26:25.910 22:25:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.910 22:25:20 -- nvmf/common.sh@46 -- # : 0 00:26:25.910 22:25:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:25.910 22:25:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:25.910 22:25:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:25.910 22:25:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.910 22:25:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.910 22:25:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:25.910 22:25:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:25.910 22:25:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:25.910 22:25:20 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:25.910 22:25:20 -- host/dma.sh@13 -- # exit 0 00:26:25.910 00:26:25.910 real 0m0.109s 00:26:25.910 user 0m0.051s 00:26:25.910 sys 0m0.066s 00:26:25.910 22:25:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.910 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:25.910 ************************************ 00:26:25.910 END TEST dma 00:26:25.910 ************************************ 00:26:25.910 22:25:20 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:25.910 22:25:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:25.910 22:25:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:25.910 22:25:20 -- common/autotest_common.sh@10 -- # set +x 00:26:25.910 ************************************ 00:26:25.910 START TEST nvmf_identify 00:26:25.910 ************************************ 00:26:25.910 22:25:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:25.910 * Looking for 
test storage... 00:26:25.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.910 22:25:20 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.910 22:25:20 -- nvmf/common.sh@7 -- # uname -s 00:26:25.910 22:25:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.910 22:25:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.910 22:25:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.910 22:25:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.910 22:25:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.910 22:25:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.910 22:25:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.910 22:25:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.910 22:25:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.910 22:25:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.910 22:25:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:25.910 22:25:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:25.910 22:25:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.910 22:25:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.910 22:25:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.910 22:25:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.910 22:25:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.910 22:25:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.910 22:25:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.910 22:25:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.910 22:25:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.910 22:25:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.910 22:25:21 -- paths/export.sh@5 -- # export PATH 00:26:25.910 22:25:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.910 22:25:21 -- nvmf/common.sh@46 -- # : 0 00:26:25.910 22:25:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:25.910 22:25:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:25.910 22:25:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:25.910 22:25:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.910 22:25:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.910 22:25:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:25.910 22:25:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:25.910 22:25:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:25.910 22:25:21 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:25.910 22:25:21 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:25.910 22:25:21 -- host/identify.sh@14 -- # nvmftestinit 00:26:25.910 22:25:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:25.910 22:25:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.910 22:25:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:25.910 22:25:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:25.910 22:25:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:25.910 22:25:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.910 22:25:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.910 22:25:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.910 22:25:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:25.910 22:25:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:25.910 22:25:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:25.910 22:25:21 -- common/autotest_common.sh@10 -- # set +x 00:26:32.473 22:25:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:32.473 22:25:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:32.473 22:25:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:32.473 22:25:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:32.473 22:25:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:32.473 22:25:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:32.473 22:25:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:32.473 22:25:26 -- nvmf/common.sh@294 -- # net_devs=() 00:26:32.473 22:25:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:32.473 22:25:26 -- nvmf/common.sh@295 
-- # e810=() 00:26:32.473 22:25:26 -- nvmf/common.sh@295 -- # local -ga e810 00:26:32.473 22:25:26 -- nvmf/common.sh@296 -- # x722=() 00:26:32.473 22:25:26 -- nvmf/common.sh@296 -- # local -ga x722 00:26:32.473 22:25:26 -- nvmf/common.sh@297 -- # mlx=() 00:26:32.473 22:25:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:32.473 22:25:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.473 22:25:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.473 22:25:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.473 22:25:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.473 22:25:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.473 22:25:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.473 22:25:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.473 22:25:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.473 22:25:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.473 22:25:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.473 22:25:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.473 22:25:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:32.473 22:25:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:32.473 22:25:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:32.473 22:25:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:32.473 22:25:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:32.473 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:32.473 22:25:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:32.473 22:25:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:32.473 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:32.473 22:25:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:32.473 22:25:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:32.473 22:25:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:32.473 22:25:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.473 22:25:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:32.473 22:25:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.473 22:25:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:32.474 Found 
net devices under 0000:86:00.0: cvl_0_0 00:26:32.474 22:25:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.474 22:25:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:32.474 22:25:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.474 22:25:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:32.474 22:25:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.474 22:25:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:32.474 Found net devices under 0000:86:00.1: cvl_0_1 00:26:32.474 22:25:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.474 22:25:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:32.474 22:25:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:32.474 22:25:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:32.474 22:25:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:32.474 22:25:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:32.474 22:25:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.474 22:25:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.474 22:25:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.474 22:25:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:32.474 22:25:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.474 22:25:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.474 22:25:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:32.474 22:25:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.474 22:25:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.474 22:25:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:32.474 22:25:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:32.474 22:25:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.474 22:25:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.474 22:25:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.474 22:25:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.474 22:25:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:32.474 22:25:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.474 22:25:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.474 22:25:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.474 22:25:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:32.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:26:32.474 00:26:32.474 --- 10.0.0.2 ping statistics --- 00:26:32.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.474 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:26:32.474 22:25:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:32.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:26:32.474 00:26:32.474 --- 10.0.0.1 ping statistics --- 00:26:32.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.474 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:26:32.474 22:25:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.474 22:25:26 -- nvmf/common.sh@410 -- # return 0 00:26:32.474 22:25:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:32.474 22:25:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.474 22:25:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:32.474 22:25:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:32.474 22:25:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.474 22:25:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:32.474 22:25:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:32.474 22:25:26 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:32.474 22:25:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:32.474 22:25:26 -- common/autotest_common.sh@10 -- # set +x 00:26:32.474 22:25:26 -- host/identify.sh@19 -- # nvmfpid=3679321 00:26:32.474 22:25:26 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:32.474 22:25:26 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:32.474 22:25:26 -- host/identify.sh@23 -- # waitforlisten 3679321 00:26:32.474 22:25:26 -- common/autotest_common.sh@819 -- # '[' -z 3679321 ']' 00:26:32.474 22:25:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.474 22:25:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:32.474 22:25:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.474 22:25:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:32.474 22:25:26 -- common/autotest_common.sh@10 -- # set +x 00:26:32.474 [2024-07-24 22:25:26.722985] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:32.474 [2024-07-24 22:25:26.723027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.474 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.474 [2024-07-24 22:25:26.780800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.474 [2024-07-24 22:25:26.821437] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:32.474 [2024-07-24 22:25:26.821550] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.474 [2024-07-24 22:25:26.821559] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.474 [2024-07-24 22:25:26.821565] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
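(Side note on the two notices printed just above: the target was started with -e 0xFFFF, so all tracepoint groups are enabled. A minimal way to follow up on them, assuming the same build tree; the "-i 0" shm id and the /dev/shm path are taken from the log, the spdk_trace location is an assumption:)

  ./build/bin/spdk_trace -s nvmf -i 0     # snapshot the running target's tracepoints
  cp /dev/shm/nvmf_trace.0 /tmp/          # or keep the shared-memory file for offline analysis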
00:26:32.474 [2024-07-24 22:25:26.821609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.474 [2024-07-24 22:25:26.821703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.474 [2024-07-24 22:25:26.821770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.474 [2024-07-24 22:25:26.821772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.474 22:25:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:32.474 22:25:27 -- common/autotest_common.sh@852 -- # return 0 00:26:32.474 22:25:27 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:32.474 22:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:32.474 22:25:27 -- common/autotest_common.sh@10 -- # set +x 00:26:32.474 [2024-07-24 22:25:27.539294] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.474 22:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:32.474 22:25:27 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:32.474 22:25:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:32.474 22:25:27 -- common/autotest_common.sh@10 -- # set +x 00:26:32.474 22:25:27 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:32.474 22:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:32.474 22:25:27 -- common/autotest_common.sh@10 -- # set +x 00:26:32.474 Malloc0 00:26:32.474 22:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:32.474 22:25:27 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:32.474 22:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:32.474 22:25:27 -- common/autotest_common.sh@10 -- # set +x 00:26:32.734 22:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:32.734 22:25:27 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:32.734 22:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:32.734 22:25:27 -- common/autotest_common.sh@10 -- # set +x 00:26:32.734 22:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:32.734 22:25:27 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:32.734 22:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:32.734 22:25:27 -- common/autotest_common.sh@10 -- # set +x 00:26:32.734 [2024-07-24 22:25:27.627093] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.734 22:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:32.734 22:25:27 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:32.734 22:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:32.734 22:25:27 -- common/autotest_common.sh@10 -- # set +x 00:26:32.734 22:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:32.734 22:25:27 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:32.734 22:25:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:32.734 22:25:27 -- common/autotest_common.sh@10 -- # set +x 00:26:32.734 [2024-07-24 22:25:27.642927] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:32.734 [ 
00:26:32.734 { 00:26:32.734 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:32.734 "subtype": "Discovery", 00:26:32.734 "listen_addresses": [ 00:26:32.734 { 00:26:32.734 "transport": "TCP", 00:26:32.734 "trtype": "TCP", 00:26:32.734 "adrfam": "IPv4", 00:26:32.734 "traddr": "10.0.0.2", 00:26:32.734 "trsvcid": "4420" 00:26:32.734 } 00:26:32.734 ], 00:26:32.734 "allow_any_host": true, 00:26:32.734 "hosts": [] 00:26:32.734 }, 00:26:32.734 { 00:26:32.734 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:32.734 "subtype": "NVMe", 00:26:32.734 "listen_addresses": [ 00:26:32.734 { 00:26:32.734 "transport": "TCP", 00:26:32.734 "trtype": "TCP", 00:26:32.734 "adrfam": "IPv4", 00:26:32.734 "traddr": "10.0.0.2", 00:26:32.734 "trsvcid": "4420" 00:26:32.734 } 00:26:32.734 ], 00:26:32.734 "allow_any_host": true, 00:26:32.734 "hosts": [], 00:26:32.734 "serial_number": "SPDK00000000000001", 00:26:32.734 "model_number": "SPDK bdev Controller", 00:26:32.734 "max_namespaces": 32, 00:26:32.734 "min_cntlid": 1, 00:26:32.734 "max_cntlid": 65519, 00:26:32.734 "namespaces": [ 00:26:32.734 { 00:26:32.734 "nsid": 1, 00:26:32.734 "bdev_name": "Malloc0", 00:26:32.734 "name": "Malloc0", 00:26:32.734 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:32.734 "eui64": "ABCDEF0123456789", 00:26:32.734 "uuid": "77607751-7fba-4ea3-be23-da7a9d90c0be" 00:26:32.734 } 00:26:32.734 ] 00:26:32.734 } 00:26:32.734 ] 00:26:32.734 22:25:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:32.734 22:25:27 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:32.734 [2024-07-24 22:25:27.675685] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
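(Condensed, the identify host test above drives the target through the RPC sequence below before pointing spdk_nvme_identify at the discovery subsystem. This is a readability sketch only; the rpc.py path and RPC socket are assumptions, but every method name, argument, NQN and address is the one traced above:)

  RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_get_subsystems
  # query the discovery subsystem (and everything it reports) over NVMe/TCP
  ./build/bin/spdk_nvme_identify -L all \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'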
00:26:32.734 [2024-07-24 22:25:27.675720] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3679567 ] 00:26:32.735 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.735 [2024-07-24 22:25:27.706344] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:32.735 [2024-07-24 22:25:27.706385] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:32.735 [2024-07-24 22:25:27.706389] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:32.735 [2024-07-24 22:25:27.706400] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:32.735 [2024-07-24 22:25:27.706406] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:32.735 [2024-07-24 22:25:27.706961] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:32.735 [2024-07-24 22:25:27.706992] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x821ca0 0 00:26:32.735 [2024-07-24 22:25:27.721053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:32.735 [2024-07-24 22:25:27.721069] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:32.735 [2024-07-24 22:25:27.721075] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:32.735 [2024-07-24 22:25:27.721078] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:32.735 [2024-07-24 22:25:27.721112] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.721117] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.721121] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x821ca0) 00:26:32.735 [2024-07-24 22:25:27.721132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:32.735 [2024-07-24 22:25:27.721148] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x875f80, cid 0, qid 0 00:26:32.735 [2024-07-24 22:25:27.729054] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.735 [2024-07-24 22:25:27.729061] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.735 [2024-07-24 22:25:27.729064] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.729069] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x875f80) on tqpair=0x821ca0 00:26:32.735 [2024-07-24 22:25:27.729080] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:32.735 [2024-07-24 22:25:27.729085] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:32.735 [2024-07-24 22:25:27.729090] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:32.735 [2024-07-24 22:25:27.729100] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.729104] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.729107] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x821ca0) 00:26:32.735 [2024-07-24 22:25:27.729114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.735 [2024-07-24 22:25:27.729126] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x875f80, cid 0, qid 0 00:26:32.735 [2024-07-24 22:25:27.729389] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.735 [2024-07-24 22:25:27.729400] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.735 [2024-07-24 22:25:27.729404] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.729407] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x875f80) on tqpair=0x821ca0 00:26:32.735 [2024-07-24 22:25:27.729413] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:32.735 [2024-07-24 22:25:27.729421] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:32.735 [2024-07-24 22:25:27.729429] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.729432] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.729436] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x821ca0) 00:26:32.735 [2024-07-24 22:25:27.729443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.735 [2024-07-24 22:25:27.729455] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x875f80, cid 0, qid 0 00:26:32.735 [2024-07-24 22:25:27.729604] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.735 [2024-07-24 22:25:27.729614] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.735 [2024-07-24 22:25:27.729618] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.729621] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x875f80) on tqpair=0x821ca0 00:26:32.735 [2024-07-24 22:25:27.729626] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:32.735 [2024-07-24 22:25:27.729635] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:32.735 [2024-07-24 22:25:27.729642] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.729645] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.729648] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x821ca0) 00:26:32.735 [2024-07-24 22:25:27.729655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.735 [2024-07-24 22:25:27.729667] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x875f80, cid 0, qid 0 00:26:32.735 [2024-07-24 22:25:27.729820] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.735 [2024-07-24 22:25:27.729833] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.735 [2024-07-24 22:25:27.729837] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.729840] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x875f80) on tqpair=0x821ca0 00:26:32.735 [2024-07-24 22:25:27.729846] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:32.735 [2024-07-24 22:25:27.729856] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.729860] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.729863] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x821ca0) 00:26:32.735 [2024-07-24 22:25:27.729869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.735 [2024-07-24 22:25:27.729881] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x875f80, cid 0, qid 0 00:26:32.735 [2024-07-24 22:25:27.730026] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.735 [2024-07-24 22:25:27.730036] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.735 [2024-07-24 22:25:27.730039] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.730050] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x875f80) on tqpair=0x821ca0 00:26:32.735 [2024-07-24 22:25:27.730055] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:32.735 [2024-07-24 22:25:27.730060] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:32.735 [2024-07-24 22:25:27.730069] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:32.735 [2024-07-24 22:25:27.730173] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:32.735 [2024-07-24 22:25:27.730178] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:32.735 [2024-07-24 22:25:27.730187] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.730190] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.730193] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x821ca0) 00:26:32.735 [2024-07-24 22:25:27.730200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.735 [2024-07-24 22:25:27.730213] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x875f80, cid 0, qid 0 00:26:32.735 [2024-07-24 22:25:27.730358] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.735 [2024-07-24 22:25:27.730368] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.735 [2024-07-24 22:25:27.730371] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.735 
[2024-07-24 22:25:27.730374] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x875f80) on tqpair=0x821ca0 00:26:32.735 [2024-07-24 22:25:27.730379] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:32.735 [2024-07-24 22:25:27.730390] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.730394] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.730397] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x821ca0) 00:26:32.735 [2024-07-24 22:25:27.730403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.735 [2024-07-24 22:25:27.730419] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x875f80, cid 0, qid 0 00:26:32.735 [2024-07-24 22:25:27.730560] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.735 [2024-07-24 22:25:27.730570] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.735 [2024-07-24 22:25:27.730574] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.735 [2024-07-24 22:25:27.730577] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x875f80) on tqpair=0x821ca0 00:26:32.735 [2024-07-24 22:25:27.730582] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:32.735 [2024-07-24 22:25:27.730586] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:32.735 [2024-07-24 22:25:27.730595] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:32.736 [2024-07-24 22:25:27.730603] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:32.736 [2024-07-24 22:25:27.730612] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.730615] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.730618] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x821ca0) 00:26:32.736 [2024-07-24 22:25:27.730624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.736 [2024-07-24 22:25:27.730637] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x875f80, cid 0, qid 0 00:26:32.736 [2024-07-24 22:25:27.730813] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.736 [2024-07-24 22:25:27.730824] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.736 [2024-07-24 22:25:27.730828] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.730831] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x821ca0): datao=0, datal=4096, cccid=0 00:26:32.736 [2024-07-24 22:25:27.730835] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x875f80) on tqpair(0x821ca0): expected_datao=0, payload_size=4096 00:26:32.736 
[2024-07-24 22:25:27.731084] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.731088] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776050] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.736 [2024-07-24 22:25:27.776059] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.736 [2024-07-24 22:25:27.776062] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776066] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x875f80) on tqpair=0x821ca0 00:26:32.736 [2024-07-24 22:25:27.776074] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:32.736 [2024-07-24 22:25:27.776078] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:32.736 [2024-07-24 22:25:27.776082] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:32.736 [2024-07-24 22:25:27.776086] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:32.736 [2024-07-24 22:25:27.776090] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:32.736 [2024-07-24 22:25:27.776095] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:32.736 [2024-07-24 22:25:27.776106] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:32.736 [2024-07-24 22:25:27.776115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776118] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776122] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x821ca0) 00:26:32.736 [2024-07-24 22:25:27.776129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:32.736 [2024-07-24 22:25:27.776141] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x875f80, cid 0, qid 0 00:26:32.736 [2024-07-24 22:25:27.776375] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.736 [2024-07-24 22:25:27.776385] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.736 [2024-07-24 22:25:27.776388] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776392] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x875f80) on tqpair=0x821ca0 00:26:32.736 [2024-07-24 22:25:27.776400] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776403] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776406] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x821ca0) 00:26:32.736 [2024-07-24 22:25:27.776412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.736 [2024-07-24 22:25:27.776417] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776421] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776424] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x821ca0) 00:26:32.736 [2024-07-24 22:25:27.776428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.736 [2024-07-24 22:25:27.776433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776436] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x821ca0) 00:26:32.736 [2024-07-24 22:25:27.776444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.736 [2024-07-24 22:25:27.776449] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776452] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776455] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x821ca0) 00:26:32.736 [2024-07-24 22:25:27.776460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.736 [2024-07-24 22:25:27.776464] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:32.736 [2024-07-24 22:25:27.776477] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:32.736 [2024-07-24 22:25:27.776482] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776485] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776488] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x821ca0) 00:26:32.736 [2024-07-24 22:25:27.776494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.736 [2024-07-24 22:25:27.776508] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x875f80, cid 0, qid 0 00:26:32.736 [2024-07-24 22:25:27.776512] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8760e0, cid 1, qid 0 00:26:32.736 [2024-07-24 22:25:27.776516] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x876240, cid 2, qid 0 00:26:32.736 [2024-07-24 22:25:27.776522] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8763a0, cid 3, qid 0 00:26:32.736 [2024-07-24 22:25:27.776526] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x876500, cid 4, qid 0 00:26:32.736 [2024-07-24 22:25:27.776822] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.736 [2024-07-24 22:25:27.776832] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.736 [2024-07-24 22:25:27.776835] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776839] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x876500) on 
tqpair=0x821ca0 00:26:32.736 [2024-07-24 22:25:27.776844] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:32.736 [2024-07-24 22:25:27.776848] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:32.736 [2024-07-24 22:25:27.776859] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776863] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.776866] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x821ca0) 00:26:32.736 [2024-07-24 22:25:27.776872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.736 [2024-07-24 22:25:27.776884] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x876500, cid 4, qid 0 00:26:32.736 [2024-07-24 22:25:27.777049] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.736 [2024-07-24 22:25:27.777060] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.736 [2024-07-24 22:25:27.777063] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.777067] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x821ca0): datao=0, datal=4096, cccid=4 00:26:32.736 [2024-07-24 22:25:27.777070] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x876500) on tqpair(0x821ca0): expected_datao=0, payload_size=4096 00:26:32.736 [2024-07-24 22:25:27.777331] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.777335] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.777538] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.736 [2024-07-24 22:25:27.777548] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.736 [2024-07-24 22:25:27.777551] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.777554] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x876500) on tqpair=0x821ca0 00:26:32.736 [2024-07-24 22:25:27.777568] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:32.736 [2024-07-24 22:25:27.777588] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.777593] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.777596] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x821ca0) 00:26:32.736 [2024-07-24 22:25:27.777602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.736 [2024-07-24 22:25:27.777608] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.777611] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.736 [2024-07-24 22:25:27.777614] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x821ca0) 00:26:32.736 [2024-07-24 22:25:27.777619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE 
(18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:32.736 [2024-07-24 22:25:27.777635] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x876500, cid 4, qid 0 00:26:32.736 [2024-07-24 22:25:27.777639] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x876660, cid 5, qid 0 00:26:32.736 [2024-07-24 22:25:27.777834] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.736 [2024-07-24 22:25:27.777845] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.737 [2024-07-24 22:25:27.777848] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.777852] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x821ca0): datao=0, datal=1024, cccid=4 00:26:32.737 [2024-07-24 22:25:27.777856] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x876500) on tqpair(0x821ca0): expected_datao=0, payload_size=1024 00:26:32.737 [2024-07-24 22:25:27.777862] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.777866] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.777871] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.737 [2024-07-24 22:25:27.777875] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.737 [2024-07-24 22:25:27.777878] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.777882] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x876660) on tqpair=0x821ca0 00:26:32.737 [2024-07-24 22:25:27.818278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.737 [2024-07-24 22:25:27.818292] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.737 [2024-07-24 22:25:27.818296] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.818299] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x876500) on tqpair=0x821ca0 00:26:32.737 [2024-07-24 22:25:27.818310] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.818313] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.818316] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x821ca0) 00:26:32.737 [2024-07-24 22:25:27.818323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.737 [2024-07-24 22:25:27.818342] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x876500, cid 4, qid 0 00:26:32.737 [2024-07-24 22:25:27.818701] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.737 [2024-07-24 22:25:27.818706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.737 [2024-07-24 22:25:27.818709] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.818712] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x821ca0): datao=0, datal=3072, cccid=4 00:26:32.737 [2024-07-24 22:25:27.818716] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x876500) on tqpair(0x821ca0): expected_datao=0, payload_size=3072 00:26:32.737 [2024-07-24 22:25:27.818968] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:26:32.737 [2024-07-24 22:25:27.818972] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.864051] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:32.737 [2024-07-24 22:25:27.864059] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:32.737 [2024-07-24 22:25:27.864062] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.864066] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x876500) on tqpair=0x821ca0 00:26:32.737 [2024-07-24 22:25:27.864074] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.864078] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.864081] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x821ca0) 00:26:32.737 [2024-07-24 22:25:27.864087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.737 [2024-07-24 22:25:27.864102] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x876500, cid 4, qid 0 00:26:32.737 [2024-07-24 22:25:27.864366] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:32.737 [2024-07-24 22:25:27.864377] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:32.737 [2024-07-24 22:25:27.864380] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.864383] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x821ca0): datao=0, datal=8, cccid=4 00:26:32.737 [2024-07-24 22:25:27.864387] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x876500) on tqpair(0x821ca0): expected_datao=0, payload_size=8 00:26:32.737 [2024-07-24 22:25:27.864394] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:32.737 [2024-07-24 22:25:27.864397] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:33.000 [2024-07-24 22:25:27.906270] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.000 [2024-07-24 22:25:27.906283] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.000 [2024-07-24 22:25:27.906286] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.000 [2024-07-24 22:25:27.906290] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x876500) on tqpair=0x821ca0 00:26:33.000 ===================================================== 00:26:33.000 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:33.000 ===================================================== 00:26:33.000 Controller Capabilities/Features 00:26:33.000 ================================ 00:26:33.000 Vendor ID: 0000 00:26:33.000 Subsystem Vendor ID: 0000 00:26:33.000 Serial Number: .................... 00:26:33.000 Model Number: ........................................ 
00:26:33.000 Firmware Version: 24.01.1 00:26:33.000 Recommended Arb Burst: 0 00:26:33.000 IEEE OUI Identifier: 00 00 00 00:26:33.000 Multi-path I/O 00:26:33.000 May have multiple subsystem ports: No 00:26:33.000 May have multiple controllers: No 00:26:33.000 Associated with SR-IOV VF: No 00:26:33.000 Max Data Transfer Size: 131072 00:26:33.000 Max Number of Namespaces: 0 00:26:33.000 Max Number of I/O Queues: 1024 00:26:33.000 NVMe Specification Version (VS): 1.3 00:26:33.000 NVMe Specification Version (Identify): 1.3 00:26:33.000 Maximum Queue Entries: 128 00:26:33.000 Contiguous Queues Required: Yes 00:26:33.000 Arbitration Mechanisms Supported 00:26:33.000 Weighted Round Robin: Not Supported 00:26:33.000 Vendor Specific: Not Supported 00:26:33.000 Reset Timeout: 15000 ms 00:26:33.000 Doorbell Stride: 4 bytes 00:26:33.000 NVM Subsystem Reset: Not Supported 00:26:33.000 Command Sets Supported 00:26:33.000 NVM Command Set: Supported 00:26:33.000 Boot Partition: Not Supported 00:26:33.000 Memory Page Size Minimum: 4096 bytes 00:26:33.000 Memory Page Size Maximum: 4096 bytes 00:26:33.000 Persistent Memory Region: Not Supported 00:26:33.000 Optional Asynchronous Events Supported 00:26:33.000 Namespace Attribute Notices: Not Supported 00:26:33.000 Firmware Activation Notices: Not Supported 00:26:33.000 ANA Change Notices: Not Supported 00:26:33.000 PLE Aggregate Log Change Notices: Not Supported 00:26:33.000 LBA Status Info Alert Notices: Not Supported 00:26:33.000 EGE Aggregate Log Change Notices: Not Supported 00:26:33.000 Normal NVM Subsystem Shutdown event: Not Supported 00:26:33.000 Zone Descriptor Change Notices: Not Supported 00:26:33.000 Discovery Log Change Notices: Supported 00:26:33.000 Controller Attributes 00:26:33.000 128-bit Host Identifier: Not Supported 00:26:33.000 Non-Operational Permissive Mode: Not Supported 00:26:33.000 NVM Sets: Not Supported 00:26:33.000 Read Recovery Levels: Not Supported 00:26:33.000 Endurance Groups: Not Supported 00:26:33.000 Predictable Latency Mode: Not Supported 00:26:33.000 Traffic Based Keep ALive: Not Supported 00:26:33.000 Namespace Granularity: Not Supported 00:26:33.000 SQ Associations: Not Supported 00:26:33.000 UUID List: Not Supported 00:26:33.000 Multi-Domain Subsystem: Not Supported 00:26:33.000 Fixed Capacity Management: Not Supported 00:26:33.000 Variable Capacity Management: Not Supported 00:26:33.000 Delete Endurance Group: Not Supported 00:26:33.000 Delete NVM Set: Not Supported 00:26:33.000 Extended LBA Formats Supported: Not Supported 00:26:33.000 Flexible Data Placement Supported: Not Supported 00:26:33.000 00:26:33.000 Controller Memory Buffer Support 00:26:33.000 ================================ 00:26:33.000 Supported: No 00:26:33.000 00:26:33.000 Persistent Memory Region Support 00:26:33.000 ================================ 00:26:33.000 Supported: No 00:26:33.000 00:26:33.000 Admin Command Set Attributes 00:26:33.000 ============================ 00:26:33.000 Security Send/Receive: Not Supported 00:26:33.000 Format NVM: Not Supported 00:26:33.000 Firmware Activate/Download: Not Supported 00:26:33.000 Namespace Management: Not Supported 00:26:33.000 Device Self-Test: Not Supported 00:26:33.000 Directives: Not Supported 00:26:33.000 NVMe-MI: Not Supported 00:26:33.000 Virtualization Management: Not Supported 00:26:33.000 Doorbell Buffer Config: Not Supported 00:26:33.000 Get LBA Status Capability: Not Supported 00:26:33.000 Command & Feature Lockdown Capability: Not Supported 00:26:33.000 Abort Command Limit: 1 00:26:33.000 
Async Event Request Limit: 4 00:26:33.000 Number of Firmware Slots: N/A 00:26:33.000 Firmware Slot 1 Read-Only: N/A 00:26:33.000 Firmware Activation Without Reset: N/A 00:26:33.000 Multiple Update Detection Support: N/A 00:26:33.000 Firmware Update Granularity: No Information Provided 00:26:33.000 Per-Namespace SMART Log: No 00:26:33.000 Asymmetric Namespace Access Log Page: Not Supported 00:26:33.000 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:33.000 Command Effects Log Page: Not Supported 00:26:33.000 Get Log Page Extended Data: Supported 00:26:33.000 Telemetry Log Pages: Not Supported 00:26:33.000 Persistent Event Log Pages: Not Supported 00:26:33.000 Supported Log Pages Log Page: May Support 00:26:33.000 Commands Supported & Effects Log Page: Not Supported 00:26:33.000 Feature Identifiers & Effects Log Page:May Support 00:26:33.000 NVMe-MI Commands & Effects Log Page: May Support 00:26:33.000 Data Area 4 for Telemetry Log: Not Supported 00:26:33.000 Error Log Page Entries Supported: 128 00:26:33.000 Keep Alive: Not Supported 00:26:33.000 00:26:33.000 NVM Command Set Attributes 00:26:33.000 ========================== 00:26:33.000 Submission Queue Entry Size 00:26:33.000 Max: 1 00:26:33.000 Min: 1 00:26:33.000 Completion Queue Entry Size 00:26:33.000 Max: 1 00:26:33.000 Min: 1 00:26:33.000 Number of Namespaces: 0 00:26:33.000 Compare Command: Not Supported 00:26:33.000 Write Uncorrectable Command: Not Supported 00:26:33.000 Dataset Management Command: Not Supported 00:26:33.000 Write Zeroes Command: Not Supported 00:26:33.000 Set Features Save Field: Not Supported 00:26:33.000 Reservations: Not Supported 00:26:33.001 Timestamp: Not Supported 00:26:33.001 Copy: Not Supported 00:26:33.001 Volatile Write Cache: Not Present 00:26:33.001 Atomic Write Unit (Normal): 1 00:26:33.001 Atomic Write Unit (PFail): 1 00:26:33.001 Atomic Compare & Write Unit: 1 00:26:33.001 Fused Compare & Write: Supported 00:26:33.001 Scatter-Gather List 00:26:33.001 SGL Command Set: Supported 00:26:33.001 SGL Keyed: Supported 00:26:33.001 SGL Bit Bucket Descriptor: Not Supported 00:26:33.001 SGL Metadata Pointer: Not Supported 00:26:33.001 Oversized SGL: Not Supported 00:26:33.001 SGL Metadata Address: Not Supported 00:26:33.001 SGL Offset: Supported 00:26:33.001 Transport SGL Data Block: Not Supported 00:26:33.001 Replay Protected Memory Block: Not Supported 00:26:33.001 00:26:33.001 Firmware Slot Information 00:26:33.001 ========================= 00:26:33.001 Active slot: 0 00:26:33.001 00:26:33.001 00:26:33.001 Error Log 00:26:33.001 ========= 00:26:33.001 00:26:33.001 Active Namespaces 00:26:33.001 ================= 00:26:33.001 Discovery Log Page 00:26:33.001 ================== 00:26:33.001 Generation Counter: 2 00:26:33.001 Number of Records: 2 00:26:33.001 Record Format: 0 00:26:33.001 00:26:33.001 Discovery Log Entry 0 00:26:33.001 ---------------------- 00:26:33.001 Transport Type: 3 (TCP) 00:26:33.001 Address Family: 1 (IPv4) 00:26:33.001 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:33.001 Entry Flags: 00:26:33.001 Duplicate Returned Information: 1 00:26:33.001 Explicit Persistent Connection Support for Discovery: 1 00:26:33.001 Transport Requirements: 00:26:33.001 Secure Channel: Not Required 00:26:33.001 Port ID: 0 (0x0000) 00:26:33.001 Controller ID: 65535 (0xffff) 00:26:33.001 Admin Max SQ Size: 128 00:26:33.001 Transport Service Identifier: 4420 00:26:33.001 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:33.001 Transport Address: 10.0.0.2 00:26:33.001 
Discovery Log Entry 1 00:26:33.001 ---------------------- 00:26:33.001 Transport Type: 3 (TCP) 00:26:33.001 Address Family: 1 (IPv4) 00:26:33.001 Subsystem Type: 2 (NVM Subsystem) 00:26:33.001 Entry Flags: 00:26:33.001 Duplicate Returned Information: 0 00:26:33.001 Explicit Persistent Connection Support for Discovery: 0 00:26:33.001 Transport Requirements: 00:26:33.001 Secure Channel: Not Required 00:26:33.001 Port ID: 0 (0x0000) 00:26:33.001 Controller ID: 65535 (0xffff) 00:26:33.001 Admin Max SQ Size: 128 00:26:33.001 Transport Service Identifier: 4420 00:26:33.001 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:33.001 Transport Address: 10.0.0.2 [2024-07-24 22:25:27.906370] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:33.001 [2024-07-24 22:25:27.906384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.001 [2024-07-24 22:25:27.906389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.001 [2024-07-24 22:25:27.906394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.001 [2024-07-24 22:25:27.906400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.001 [2024-07-24 22:25:27.906408] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.906411] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.906414] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x821ca0) 00:26:33.001 [2024-07-24 22:25:27.906421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.001 [2024-07-24 22:25:27.906435] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8763a0, cid 3, qid 0 00:26:33.001 [2024-07-24 22:25:27.906591] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.001 [2024-07-24 22:25:27.906600] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.001 [2024-07-24 22:25:27.906603] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.906607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8763a0) on tqpair=0x821ca0 00:26:33.001 [2024-07-24 22:25:27.906614] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.906618] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.906621] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x821ca0) 00:26:33.001 [2024-07-24 22:25:27.906627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.001 [2024-07-24 22:25:27.906643] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8763a0, cid 3, qid 0 00:26:33.001 [2024-07-24 22:25:27.906810] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.001 [2024-07-24 22:25:27.906819] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.001 [2024-07-24 22:25:27.906822] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.906826] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8763a0) on tqpair=0x821ca0 00:26:33.001 [2024-07-24 22:25:27.906830] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:33.001 [2024-07-24 22:25:27.906837] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:33.001 [2024-07-24 22:25:27.906847] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.906851] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.906854] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x821ca0) 00:26:33.001 [2024-07-24 22:25:27.906860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.001 [2024-07-24 22:25:27.906872] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8763a0, cid 3, qid 0 00:26:33.001 [2024-07-24 22:25:27.907017] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.001 [2024-07-24 22:25:27.907027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.001 [2024-07-24 22:25:27.907030] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.907033] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8763a0) on tqpair=0x821ca0 00:26:33.001 [2024-07-24 22:25:27.907050] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.907053] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.907057] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x821ca0) 00:26:33.001 [2024-07-24 22:25:27.907063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.001 [2024-07-24 22:25:27.907075] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8763a0, cid 3, qid 0 00:26:33.001 [2024-07-24 22:25:27.911050] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.001 [2024-07-24 22:25:27.911063] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.001 [2024-07-24 22:25:27.911066] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.911069] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8763a0) on tqpair=0x821ca0 00:26:33.001 [2024-07-24 22:25:27.911081] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.911085] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.911088] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x821ca0) 00:26:33.001 [2024-07-24 22:25:27.911095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.001 [2024-07-24 22:25:27.911108] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8763a0, cid 3, qid 0 00:26:33.001 [2024-07-24 22:25:27.911342] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.001 [2024-07-24 
22:25:27.911352] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.001 [2024-07-24 22:25:27.911355] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.001 [2024-07-24 22:25:27.911358] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8763a0) on tqpair=0x821ca0 00:26:33.001 [2024-07-24 22:25:27.911367] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:26:33.001 00:26:33.001 22:25:27 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:33.001 [2024-07-24 22:25:27.944922] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:33.001 [2024-07-24 22:25:27.944973] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3679572 ] 00:26:33.001 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.001 [2024-07-24 22:25:27.973150] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:33.001 [2024-07-24 22:25:27.973191] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:33.001 [2024-07-24 22:25:27.973196] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:33.001 [2024-07-24 22:25:27.973206] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:33.001 [2024-07-24 22:25:27.973213] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:33.001 [2024-07-24 22:25:27.973760] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:33.001 [2024-07-24 22:25:27.973786] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb3cca0 0 00:26:33.001 [2024-07-24 22:25:27.984055] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:33.002 [2024-07-24 22:25:27.984072] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:33.002 [2024-07-24 22:25:27.984076] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:33.002 [2024-07-24 22:25:27.984079] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:33.002 [2024-07-24 22:25:27.984110] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.984115] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.984119] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb3cca0) 00:26:33.002 [2024-07-24 22:25:27.984129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:33.002 [2024-07-24 22:25:27.984145] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb90f80, cid 0, qid 0 00:26:33.002 [2024-07-24 22:25:27.992053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.002 [2024-07-24 22:25:27.992061] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.002 [2024-07-24 22:25:27.992064] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.992068] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb90f80) on tqpair=0xb3cca0 00:26:33.002 [2024-07-24 22:25:27.992075] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:33.002 [2024-07-24 22:25:27.992081] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:33.002 [2024-07-24 22:25:27.992086] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:33.002 [2024-07-24 22:25:27.992095] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.992099] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.992102] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb3cca0) 00:26:33.002 [2024-07-24 22:25:27.992109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.002 [2024-07-24 22:25:27.992121] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb90f80, cid 0, qid 0 00:26:33.002 [2024-07-24 22:25:27.992361] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.002 [2024-07-24 22:25:27.992375] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.002 [2024-07-24 22:25:27.992378] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.992382] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb90f80) on tqpair=0xb3cca0 00:26:33.002 [2024-07-24 22:25:27.992387] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:33.002 [2024-07-24 22:25:27.992397] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:33.002 [2024-07-24 22:25:27.992409] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.992413] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.992416] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb3cca0) 00:26:33.002 [2024-07-24 22:25:27.992424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.002 [2024-07-24 22:25:27.992438] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb90f80, cid 0, qid 0 00:26:33.002 [2024-07-24 22:25:27.992589] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.002 [2024-07-24 22:25:27.992598] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.002 [2024-07-24 22:25:27.992601] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.992605] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb90f80) on tqpair=0xb3cca0 00:26:33.002 [2024-07-24 22:25:27.992610] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:33.002 [2024-07-24 22:25:27.992619] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 
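A minimal sketch of how the identify step captured in this run could be re-run by hand against the same target, assuming the SPDK build tree used by this job and a target still listening on 10.0.0.2:4420; the second command is the invocation recorded above (host/identify.sh line 45), while the discovery-only variant (no subnqn) is an assumption about spdk_nvme_identify defaulting to the discovery controller:
  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
  # Discovery pass (no subnqn given) - assumed to connect to the discovery controller and list the discovery log entries shown above
  $SPDK_BIN/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  # Identify of the advertised NVM subsystem, with all SPDK debug log flags enabled, exactly as recorded in this run
  $SPDK_BIN/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all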
00:26:33.002 [2024-07-24 22:25:27.992626] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.992629] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.992633] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb3cca0) 00:26:33.002 [2024-07-24 22:25:27.992639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.002 [2024-07-24 22:25:27.992652] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb90f80, cid 0, qid 0 00:26:33.002 [2024-07-24 22:25:27.992797] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.002 [2024-07-24 22:25:27.992807] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.002 [2024-07-24 22:25:27.992810] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.992814] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb90f80) on tqpair=0xb3cca0 00:26:33.002 [2024-07-24 22:25:27.992819] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:33.002 [2024-07-24 22:25:27.992830] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.992834] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.992837] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb3cca0) 00:26:33.002 [2024-07-24 22:25:27.992844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.002 [2024-07-24 22:25:27.992857] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb90f80, cid 0, qid 0 00:26:33.002 [2024-07-24 22:25:27.992999] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.002 [2024-07-24 22:25:27.993009] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.002 [2024-07-24 22:25:27.993012] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.993016] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb90f80) on tqpair=0xb3cca0 00:26:33.002 [2024-07-24 22:25:27.993020] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:33.002 [2024-07-24 22:25:27.993025] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:33.002 [2024-07-24 22:25:27.993033] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:33.002 [2024-07-24 22:25:27.993138] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:33.002 [2024-07-24 22:25:27.993145] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:33.002 [2024-07-24 22:25:27.993153] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.993157] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.002 [2024-07-24 
22:25:27.993160] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb3cca0) 00:26:33.002 [2024-07-24 22:25:27.993167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.002 [2024-07-24 22:25:27.993180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb90f80, cid 0, qid 0 00:26:33.002 [2024-07-24 22:25:27.993329] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.002 [2024-07-24 22:25:27.993338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.002 [2024-07-24 22:25:27.993341] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.993345] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb90f80) on tqpair=0xb3cca0 00:26:33.002 [2024-07-24 22:25:27.993350] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:33.002 [2024-07-24 22:25:27.993361] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.993364] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.993368] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb3cca0) 00:26:33.002 [2024-07-24 22:25:27.993374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.002 [2024-07-24 22:25:27.993386] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb90f80, cid 0, qid 0 00:26:33.002 [2024-07-24 22:25:27.993533] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.002 [2024-07-24 22:25:27.993543] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.002 [2024-07-24 22:25:27.993546] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.993549] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb90f80) on tqpair=0xb3cca0 00:26:33.002 [2024-07-24 22:25:27.993554] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:33.002 [2024-07-24 22:25:27.993558] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:33.002 [2024-07-24 22:25:27.993567] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:33.002 [2024-07-24 22:25:27.993576] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:33.002 [2024-07-24 22:25:27.993583] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.993587] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.993590] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb3cca0) 00:26:33.002 [2024-07-24 22:25:27.993597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.002 [2024-07-24 22:25:27.993610] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0xb90f80, cid 0, qid 0 00:26:33.002 [2024-07-24 22:25:27.993789] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:33.002 [2024-07-24 22:25:27.993800] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:33.002 [2024-07-24 22:25:27.993803] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.993806] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb3cca0): datao=0, datal=4096, cccid=0 00:26:33.002 [2024-07-24 22:25:27.993813] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb90f80) on tqpair(0xb3cca0): expected_datao=0, payload_size=4096 00:26:33.002 [2024-07-24 22:25:27.993820] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.993824] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.994111] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.002 [2024-07-24 22:25:27.994117] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.002 [2024-07-24 22:25:27.994120] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.002 [2024-07-24 22:25:27.994123] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb90f80) on tqpair=0xb3cca0 00:26:33.003 [2024-07-24 22:25:27.994131] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:33.003 [2024-07-24 22:25:27.994135] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:33.003 [2024-07-24 22:25:27.994139] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:33.003 [2024-07-24 22:25:27.994143] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:33.003 [2024-07-24 22:25:27.994147] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:33.003 [2024-07-24 22:25:27.994151] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:33.003 [2024-07-24 22:25:27.994162] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:33.003 [2024-07-24 22:25:27.994169] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994173] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994176] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb3cca0) 00:26:33.003 [2024-07-24 22:25:27.994183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:33.003 [2024-07-24 22:25:27.994195] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb90f80, cid 0, qid 0 00:26:33.003 [2024-07-24 22:25:27.994346] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.003 [2024-07-24 22:25:27.994356] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.003 [2024-07-24 22:25:27.994359] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994362] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0xb90f80) on tqpair=0xb3cca0 00:26:33.003 [2024-07-24 22:25:27.994369] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994373] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994376] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb3cca0) 00:26:33.003 [2024-07-24 22:25:27.994383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.003 [2024-07-24 22:25:27.994388] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994391] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994394] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb3cca0) 00:26:33.003 [2024-07-24 22:25:27.994399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.003 [2024-07-24 22:25:27.994404] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994408] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994411] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb3cca0) 00:26:33.003 [2024-07-24 22:25:27.994416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.003 [2024-07-24 22:25:27.994424] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994427] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994430] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.003 [2024-07-24 22:25:27.994435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.003 [2024-07-24 22:25:27.994439] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:33.003 [2024-07-24 22:25:27.994451] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:33.003 [2024-07-24 22:25:27.994457] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994461] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994464] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb3cca0) 00:26:33.003 [2024-07-24 22:25:27.994470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.003 [2024-07-24 22:25:27.994484] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb90f80, cid 0, qid 0 00:26:33.003 [2024-07-24 22:25:27.994488] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb910e0, cid 1, qid 0 00:26:33.003 [2024-07-24 22:25:27.994492] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91240, cid 2, qid 0 00:26:33.003 [2024-07-24 22:25:27.994496] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0xb913a0, cid 3, qid 0 00:26:33.003 [2024-07-24 22:25:27.994500] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91500, cid 4, qid 0 00:26:33.003 [2024-07-24 22:25:27.994680] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.003 [2024-07-24 22:25:27.994690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.003 [2024-07-24 22:25:27.994693] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994697] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb91500) on tqpair=0xb3cca0 00:26:33.003 [2024-07-24 22:25:27.994702] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:33.003 [2024-07-24 22:25:27.994707] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:26:33.003 [2024-07-24 22:25:27.994716] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:33.003 [2024-07-24 22:25:27.994725] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:33.003 [2024-07-24 22:25:27.994732] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994735] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994738] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb3cca0) 00:26:33.003 [2024-07-24 22:25:27.994745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:33.003 [2024-07-24 22:25:27.994758] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91500, cid 4, qid 0 00:26:33.003 [2024-07-24 22:25:27.994908] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.003 [2024-07-24 22:25:27.994918] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.003 [2024-07-24 22:25:27.994921] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.994924] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb91500) on tqpair=0xb3cca0 00:26:33.003 [2024-07-24 22:25:27.994978] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:33.003 [2024-07-24 22:25:27.994989] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:33.003 [2024-07-24 22:25:27.994997] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.995000] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.995003] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb3cca0) 00:26:33.003 [2024-07-24 22:25:27.995010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.003 [2024-07-24 22:25:27.995022] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91500, cid 4, qid 0 00:26:33.003 [2024-07-24 
22:25:27.995268] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:33.003 [2024-07-24 22:25:27.995278] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:33.003 [2024-07-24 22:25:27.995282] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.995285] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb3cca0): datao=0, datal=4096, cccid=4 00:26:33.003 [2024-07-24 22:25:27.995289] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb91500) on tqpair(0xb3cca0): expected_datao=0, payload_size=4096 00:26:33.003 [2024-07-24 22:25:27.995296] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.995299] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.995567] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.003 [2024-07-24 22:25:27.995572] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.003 [2024-07-24 22:25:27.995575] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.995578] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb91500) on tqpair=0xb3cca0 00:26:33.003 [2024-07-24 22:25:27.995595] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:33.003 [2024-07-24 22:25:27.995603] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:33.003 [2024-07-24 22:25:27.995613] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:33.003 [2024-07-24 22:25:27.995619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.995623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.995626] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb3cca0) 00:26:33.003 [2024-07-24 22:25:27.995633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.003 [2024-07-24 22:25:27.995645] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91500, cid 4, qid 0 00:26:33.003 [2024-07-24 22:25:27.995826] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:33.003 [2024-07-24 22:25:27.995837] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:33.003 [2024-07-24 22:25:27.995840] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.995844] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb3cca0): datao=0, datal=4096, cccid=4 00:26:33.003 [2024-07-24 22:25:27.995848] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb91500) on tqpair(0xb3cca0): expected_datao=0, payload_size=4096 00:26:33.003 [2024-07-24 22:25:27.995854] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.995858] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.996126] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.003 [2024-07-24 22:25:27.996133] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:26:33.003 [2024-07-24 22:25:27.996141] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.003 [2024-07-24 22:25:27.996144] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb91500) on tqpair=0xb3cca0 00:26:33.003 [2024-07-24 22:25:27.996159] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:33.003 [2024-07-24 22:25:27.996170] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:33.004 [2024-07-24 22:25:27.996177] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.996181] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.996184] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb3cca0) 00:26:33.004 [2024-07-24 22:25:27.996190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.004 [2024-07-24 22:25:27.996203] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91500, cid 4, qid 0 00:26:33.004 [2024-07-24 22:25:27.996356] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:33.004 [2024-07-24 22:25:27.996365] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:33.004 [2024-07-24 22:25:27.996368] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.996372] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb3cca0): datao=0, datal=4096, cccid=4 00:26:33.004 [2024-07-24 22:25:27.996376] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb91500) on tqpair(0xb3cca0): expected_datao=0, payload_size=4096 00:26:33.004 [2024-07-24 22:25:27.996627] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.996632] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.996748] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.004 [2024-07-24 22:25:27.996758] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.004 [2024-07-24 22:25:27.996761] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.996765] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb91500) on tqpair=0xb3cca0 00:26:33.004 [2024-07-24 22:25:27.996772] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:33.004 [2024-07-24 22:25:27.996782] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:33.004 [2024-07-24 22:25:27.996791] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:33.004 [2024-07-24 22:25:27.996797] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:33.004 [2024-07-24 22:25:27.996801] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 
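The DEBUG entries above are the host-side admin state machine walking its bring-up sequence against the listener at 10.0.0.2:4420 (keep-alive timeout, identify controller, set/get number of queues, identify active namespaces, identify namespace and its ID descriptors). A minimal way to reproduce the same exchange by hand, assuming nvme-cli is installed on the initiator and the target from this run is still listening; the /dev/nvme0 name is illustrative and depends on what the kernel enumerates:

  sudo modprobe nvme-tcp
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  sudo nvme id-ctrl /dev/nvme0      # same Identify Controller data the test prints below
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1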
00:26:33.004 [2024-07-24 22:25:27.996805] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:33.004 [2024-07-24 22:25:27.996811] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:33.004 [2024-07-24 22:25:27.996816] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:33.004 [2024-07-24 22:25:27.996830] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.996834] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.996839] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb3cca0) 00:26:33.004 [2024-07-24 22:25:27.996846] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.004 [2024-07-24 22:25:27.996856] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.996860] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.996863] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb3cca0) 00:26:33.004 [2024-07-24 22:25:27.996868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.004 [2024-07-24 22:25:27.996884] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91500, cid 4, qid 0 00:26:33.004 [2024-07-24 22:25:27.996889] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91660, cid 5, qid 0 00:26:33.004 [2024-07-24 22:25:27.997061] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.004 [2024-07-24 22:25:27.997072] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.004 [2024-07-24 22:25:27.997075] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997078] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb91500) on tqpair=0xb3cca0 00:26:33.004 [2024-07-24 22:25:27.997084] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.004 [2024-07-24 22:25:27.997090] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.004 [2024-07-24 22:25:27.997094] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997099] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb91660) on tqpair=0xb3cca0 00:26:33.004 [2024-07-24 22:25:27.997110] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997114] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997117] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb3cca0) 00:26:33.004 [2024-07-24 22:25:27.997124] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.004 [2024-07-24 22:25:27.997137] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91660, cid 5, qid 0 00:26:33.004 [2024-07-24 22:25:27.997284] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.004 [2024-07-24 
22:25:27.997294] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.004 [2024-07-24 22:25:27.997297] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997301] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb91660) on tqpair=0xb3cca0 00:26:33.004 [2024-07-24 22:25:27.997311] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997315] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997318] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb3cca0) 00:26:33.004 [2024-07-24 22:25:27.997325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.004 [2024-07-24 22:25:27.997337] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91660, cid 5, qid 0 00:26:33.004 [2024-07-24 22:25:27.997487] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.004 [2024-07-24 22:25:27.997497] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.004 [2024-07-24 22:25:27.997500] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997503] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb91660) on tqpair=0xb3cca0 00:26:33.004 [2024-07-24 22:25:27.997513] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997517] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997520] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb3cca0) 00:26:33.004 [2024-07-24 22:25:27.997527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.004 [2024-07-24 22:25:27.997552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91660, cid 5, qid 0 00:26:33.004 [2024-07-24 22:25:27.997696] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.004 [2024-07-24 22:25:27.997706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.004 [2024-07-24 22:25:27.997709] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997712] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb91660) on tqpair=0xb3cca0 00:26:33.004 [2024-07-24 22:25:27.997725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997729] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997732] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb3cca0) 00:26:33.004 [2024-07-24 22:25:27.997738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.004 [2024-07-24 22:25:27.997744] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997747] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997750] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb3cca0) 00:26:33.004 [2024-07-24 
22:25:27.997756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.004 [2024-07-24 22:25:27.997762] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997765] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997768] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xb3cca0) 00:26:33.004 [2024-07-24 22:25:27.997773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.004 [2024-07-24 22:25:27.997779] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997782] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.004 [2024-07-24 22:25:27.997785] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb3cca0) 00:26:33.004 [2024-07-24 22:25:27.997790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.005 [2024-07-24 22:25:27.997803] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91660, cid 5, qid 0 00:26:33.005 [2024-07-24 22:25:27.997808] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91500, cid 4, qid 0 00:26:33.005 [2024-07-24 22:25:27.997812] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb917c0, cid 6, qid 0 00:26:33.005 [2024-07-24 22:25:27.997816] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91920, cid 7, qid 0 00:26:33.005 [2024-07-24 22:25:27.998134] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:33.005 [2024-07-24 22:25:27.998145] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:33.005 [2024-07-24 22:25:27.998148] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998151] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb3cca0): datao=0, datal=8192, cccid=5 00:26:33.005 [2024-07-24 22:25:27.998155] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb91660) on tqpair(0xb3cca0): expected_datao=0, payload_size=8192 00:26:33.005 [2024-07-24 22:25:27.998162] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998165] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998170] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:33.005 [2024-07-24 22:25:27.998175] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:33.005 [2024-07-24 22:25:27.998182] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998185] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb3cca0): datao=0, datal=512, cccid=4 00:26:33.005 [2024-07-24 22:25:27.998189] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb91500) on tqpair(0xb3cca0): expected_datao=0, payload_size=512 00:26:33.005 [2024-07-24 22:25:27.998195] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998198] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998203] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:33.005 [2024-07-24 22:25:27.998208] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:33.005 [2024-07-24 22:25:27.998211] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998215] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb3cca0): datao=0, datal=512, cccid=6 00:26:33.005 [2024-07-24 22:25:27.998218] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb917c0) on tqpair(0xb3cca0): expected_datao=0, payload_size=512 00:26:33.005 [2024-07-24 22:25:27.998224] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998227] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998232] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:33.005 [2024-07-24 22:25:27.998237] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:33.005 [2024-07-24 22:25:27.998240] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998243] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb3cca0): datao=0, datal=4096, cccid=7 00:26:33.005 [2024-07-24 22:25:27.998246] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb91920) on tqpair(0xb3cca0): expected_datao=0, payload_size=4096 00:26:33.005 [2024-07-24 22:25:27.998253] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998256] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998433] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.005 [2024-07-24 22:25:27.998439] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.005 [2024-07-24 22:25:27.998442] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998445] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb91660) on tqpair=0xb3cca0 00:26:33.005 [2024-07-24 22:25:27.998457] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.005 [2024-07-24 22:25:27.998463] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.005 [2024-07-24 22:25:27.998466] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998469] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb91500) on tqpair=0xb3cca0 00:26:33.005 [2024-07-24 22:25:27.998477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.005 [2024-07-24 22:25:27.998482] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.005 [2024-07-24 22:25:27.998485] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998488] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb917c0) on tqpair=0xb3cca0 00:26:33.005 [2024-07-24 22:25:27.998494] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.005 [2024-07-24 22:25:27.998498] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.005 [2024-07-24 22:25:27.998501] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.005 [2024-07-24 22:25:27.998505] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0xb91920) on tqpair=0xb3cca0 00:26:33.005 ===================================================== 00:26:33.005 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:33.005 ===================================================== 00:26:33.005 Controller Capabilities/Features 00:26:33.005 ================================ 00:26:33.005 Vendor ID: 8086 00:26:33.005 Subsystem Vendor ID: 8086 00:26:33.005 Serial Number: SPDK00000000000001 00:26:33.005 Model Number: SPDK bdev Controller 00:26:33.005 Firmware Version: 24.01.1 00:26:33.005 Recommended Arb Burst: 6 00:26:33.005 IEEE OUI Identifier: e4 d2 5c 00:26:33.005 Multi-path I/O 00:26:33.005 May have multiple subsystem ports: Yes 00:26:33.005 May have multiple controllers: Yes 00:26:33.005 Associated with SR-IOV VF: No 00:26:33.005 Max Data Transfer Size: 131072 00:26:33.005 Max Number of Namespaces: 32 00:26:33.005 Max Number of I/O Queues: 127 00:26:33.005 NVMe Specification Version (VS): 1.3 00:26:33.005 NVMe Specification Version (Identify): 1.3 00:26:33.005 Maximum Queue Entries: 128 00:26:33.005 Contiguous Queues Required: Yes 00:26:33.005 Arbitration Mechanisms Supported 00:26:33.005 Weighted Round Robin: Not Supported 00:26:33.005 Vendor Specific: Not Supported 00:26:33.005 Reset Timeout: 15000 ms 00:26:33.005 Doorbell Stride: 4 bytes 00:26:33.005 NVM Subsystem Reset: Not Supported 00:26:33.005 Command Sets Supported 00:26:33.005 NVM Command Set: Supported 00:26:33.005 Boot Partition: Not Supported 00:26:33.005 Memory Page Size Minimum: 4096 bytes 00:26:33.005 Memory Page Size Maximum: 4096 bytes 00:26:33.005 Persistent Memory Region: Not Supported 00:26:33.005 Optional Asynchronous Events Supported 00:26:33.005 Namespace Attribute Notices: Supported 00:26:33.005 Firmware Activation Notices: Not Supported 00:26:33.005 ANA Change Notices: Not Supported 00:26:33.005 PLE Aggregate Log Change Notices: Not Supported 00:26:33.005 LBA Status Info Alert Notices: Not Supported 00:26:33.005 EGE Aggregate Log Change Notices: Not Supported 00:26:33.005 Normal NVM Subsystem Shutdown event: Not Supported 00:26:33.005 Zone Descriptor Change Notices: Not Supported 00:26:33.005 Discovery Log Change Notices: Not Supported 00:26:33.005 Controller Attributes 00:26:33.005 128-bit Host Identifier: Supported 00:26:33.005 Non-Operational Permissive Mode: Not Supported 00:26:33.005 NVM Sets: Not Supported 00:26:33.005 Read Recovery Levels: Not Supported 00:26:33.005 Endurance Groups: Not Supported 00:26:33.005 Predictable Latency Mode: Not Supported 00:26:33.005 Traffic Based Keep ALive: Not Supported 00:26:33.005 Namespace Granularity: Not Supported 00:26:33.005 SQ Associations: Not Supported 00:26:33.005 UUID List: Not Supported 00:26:33.005 Multi-Domain Subsystem: Not Supported 00:26:33.005 Fixed Capacity Management: Not Supported 00:26:33.005 Variable Capacity Management: Not Supported 00:26:33.005 Delete Endurance Group: Not Supported 00:26:33.005 Delete NVM Set: Not Supported 00:26:33.005 Extended LBA Formats Supported: Not Supported 00:26:33.005 Flexible Data Placement Supported: Not Supported 00:26:33.005 00:26:33.005 Controller Memory Buffer Support 00:26:33.005 ================================ 00:26:33.005 Supported: No 00:26:33.005 00:26:33.005 Persistent Memory Region Support 00:26:33.005 ================================ 00:26:33.005 Supported: No 00:26:33.005 00:26:33.005 Admin Command Set Attributes 00:26:33.005 ============================ 00:26:33.005 Security Send/Receive: Not Supported 00:26:33.005 Format NVM: Not 
Supported 00:26:33.005 Firmware Activate/Download: Not Supported 00:26:33.005 Namespace Management: Not Supported 00:26:33.005 Device Self-Test: Not Supported 00:26:33.005 Directives: Not Supported 00:26:33.005 NVMe-MI: Not Supported 00:26:33.005 Virtualization Management: Not Supported 00:26:33.005 Doorbell Buffer Config: Not Supported 00:26:33.005 Get LBA Status Capability: Not Supported 00:26:33.005 Command & Feature Lockdown Capability: Not Supported 00:26:33.005 Abort Command Limit: 4 00:26:33.005 Async Event Request Limit: 4 00:26:33.005 Number of Firmware Slots: N/A 00:26:33.005 Firmware Slot 1 Read-Only: N/A 00:26:33.005 Firmware Activation Without Reset: N/A 00:26:33.005 Multiple Update Detection Support: N/A 00:26:33.005 Firmware Update Granularity: No Information Provided 00:26:33.005 Per-Namespace SMART Log: No 00:26:33.005 Asymmetric Namespace Access Log Page: Not Supported 00:26:33.005 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:33.005 Command Effects Log Page: Supported 00:26:33.006 Get Log Page Extended Data: Supported 00:26:33.006 Telemetry Log Pages: Not Supported 00:26:33.006 Persistent Event Log Pages: Not Supported 00:26:33.006 Supported Log Pages Log Page: May Support 00:26:33.006 Commands Supported & Effects Log Page: Not Supported 00:26:33.006 Feature Identifiers & Effects Log Page:May Support 00:26:33.006 NVMe-MI Commands & Effects Log Page: May Support 00:26:33.006 Data Area 4 for Telemetry Log: Not Supported 00:26:33.006 Error Log Page Entries Supported: 128 00:26:33.006 Keep Alive: Supported 00:26:33.006 Keep Alive Granularity: 10000 ms 00:26:33.006 00:26:33.006 NVM Command Set Attributes 00:26:33.006 ========================== 00:26:33.006 Submission Queue Entry Size 00:26:33.006 Max: 64 00:26:33.006 Min: 64 00:26:33.006 Completion Queue Entry Size 00:26:33.006 Max: 16 00:26:33.006 Min: 16 00:26:33.006 Number of Namespaces: 32 00:26:33.006 Compare Command: Supported 00:26:33.006 Write Uncorrectable Command: Not Supported 00:26:33.006 Dataset Management Command: Supported 00:26:33.006 Write Zeroes Command: Supported 00:26:33.006 Set Features Save Field: Not Supported 00:26:33.006 Reservations: Supported 00:26:33.006 Timestamp: Not Supported 00:26:33.006 Copy: Supported 00:26:33.006 Volatile Write Cache: Present 00:26:33.006 Atomic Write Unit (Normal): 1 00:26:33.006 Atomic Write Unit (PFail): 1 00:26:33.006 Atomic Compare & Write Unit: 1 00:26:33.006 Fused Compare & Write: Supported 00:26:33.006 Scatter-Gather List 00:26:33.006 SGL Command Set: Supported 00:26:33.006 SGL Keyed: Supported 00:26:33.006 SGL Bit Bucket Descriptor: Not Supported 00:26:33.006 SGL Metadata Pointer: Not Supported 00:26:33.006 Oversized SGL: Not Supported 00:26:33.006 SGL Metadata Address: Not Supported 00:26:33.006 SGL Offset: Supported 00:26:33.006 Transport SGL Data Block: Not Supported 00:26:33.006 Replay Protected Memory Block: Not Supported 00:26:33.006 00:26:33.006 Firmware Slot Information 00:26:33.006 ========================= 00:26:33.006 Active slot: 1 00:26:33.006 Slot 1 Firmware Revision: 24.01.1 00:26:33.006 00:26:33.006 00:26:33.006 Commands Supported and Effects 00:26:33.006 ============================== 00:26:33.006 Admin Commands 00:26:33.006 -------------- 00:26:33.006 Get Log Page (02h): Supported 00:26:33.006 Identify (06h): Supported 00:26:33.006 Abort (08h): Supported 00:26:33.006 Set Features (09h): Supported 00:26:33.006 Get Features (0Ah): Supported 00:26:33.006 Asynchronous Event Request (0Ch): Supported 00:26:33.006 Keep Alive (18h): Supported 
00:26:33.006 I/O Commands 00:26:33.006 ------------ 00:26:33.006 Flush (00h): Supported LBA-Change 00:26:33.006 Write (01h): Supported LBA-Change 00:26:33.006 Read (02h): Supported 00:26:33.006 Compare (05h): Supported 00:26:33.006 Write Zeroes (08h): Supported LBA-Change 00:26:33.006 Dataset Management (09h): Supported LBA-Change 00:26:33.006 Copy (19h): Supported LBA-Change 00:26:33.006 Unknown (79h): Supported LBA-Change 00:26:33.006 Unknown (7Ah): Supported 00:26:33.006 00:26:33.006 Error Log 00:26:33.006 ========= 00:26:33.006 00:26:33.006 Arbitration 00:26:33.006 =========== 00:26:33.006 Arbitration Burst: 1 00:26:33.006 00:26:33.006 Power Management 00:26:33.006 ================ 00:26:33.006 Number of Power States: 1 00:26:33.006 Current Power State: Power State #0 00:26:33.006 Power State #0: 00:26:33.006 Max Power: 0.00 W 00:26:33.006 Non-Operational State: Operational 00:26:33.006 Entry Latency: Not Reported 00:26:33.006 Exit Latency: Not Reported 00:26:33.006 Relative Read Throughput: 0 00:26:33.006 Relative Read Latency: 0 00:26:33.006 Relative Write Throughput: 0 00:26:33.006 Relative Write Latency: 0 00:26:33.006 Idle Power: Not Reported 00:26:33.006 Active Power: Not Reported 00:26:33.006 Non-Operational Permissive Mode: Not Supported 00:26:33.006 00:26:33.006 Health Information 00:26:33.006 ================== 00:26:33.006 Critical Warnings: 00:26:33.006 Available Spare Space: OK 00:26:33.006 Temperature: OK 00:26:33.006 Device Reliability: OK 00:26:33.006 Read Only: No 00:26:33.006 Volatile Memory Backup: OK 00:26:33.006 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:33.006 Temperature Threshold: [2024-07-24 22:25:27.998591] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.006 [2024-07-24 22:25:27.998596] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.006 [2024-07-24 22:25:27.998599] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xb3cca0) 00:26:33.006 [2024-07-24 22:25:27.998605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.006 [2024-07-24 22:25:27.998620] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb91920, cid 7, qid 0 00:26:33.006 [2024-07-24 22:25:27.998780] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.006 [2024-07-24 22:25:27.998790] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.006 [2024-07-24 22:25:27.998793] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.006 [2024-07-24 22:25:27.998797] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb91920) on tqpair=0xb3cca0 00:26:33.006 [2024-07-24 22:25:27.998826] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:33.006 [2024-07-24 22:25:27.998838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.006 [2024-07-24 22:25:27.998844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.006 [2024-07-24 22:25:27.998850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.006 [2024-07-24 22:25:27.998856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.006 [2024-07-24 22:25:27.998863] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.006 [2024-07-24 22:25:27.998867] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.006 [2024-07-24 22:25:27.998872] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.006 [2024-07-24 22:25:27.998879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.006 [2024-07-24 22:25:27.998893] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.006 [2024-07-24 22:25:27.999040] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.006 [2024-07-24 22:25:28.003056] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.006 [2024-07-24 22:25:28.003060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.006 [2024-07-24 22:25:28.003064] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.006 [2024-07-24 22:25:28.003072] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.006 [2024-07-24 22:25:28.003076] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.006 [2024-07-24 22:25:28.003079] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.006 [2024-07-24 22:25:28.003086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.006 [2024-07-24 22:25:28.003104] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.006 [2024-07-24 22:25:28.003354] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.006 [2024-07-24 22:25:28.003363] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.006 [2024-07-24 22:25:28.003366] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.006 [2024-07-24 22:25:28.003370] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.006 [2024-07-24 22:25:28.003375] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:33.006 [2024-07-24 22:25:28.003379] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:33.006 [2024-07-24 22:25:28.003389] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.006 [2024-07-24 22:25:28.003393] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.006 [2024-07-24 22:25:28.003396] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.006 [2024-07-24 22:25:28.003403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.006 [2024-07-24 22:25:28.003418] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.006 [2024-07-24 22:25:28.003568] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.006 [2024-07-24 22:25:28.003578] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.006 [2024-07-24 22:25:28.003581] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:26:33.006 [2024-07-24 22:25:28.003585] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.006 [2024-07-24 22:25:28.003596] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.006 [2024-07-24 22:25:28.003600] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.006 [2024-07-24 22:25:28.003603] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.006 [2024-07-24 22:25:28.003610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.006 [2024-07-24 22:25:28.003621] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.006 [2024-07-24 22:25:28.003766] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.006 [2024-07-24 22:25:28.003776] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.006 [2024-07-24 22:25:28.003779] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.003782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.007 [2024-07-24 22:25:28.003793] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.003797] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.003800] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.007 [2024-07-24 22:25:28.003807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.007 [2024-07-24 22:25:28.003819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.007 [2024-07-24 22:25:28.003972] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.007 [2024-07-24 22:25:28.003981] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.007 [2024-07-24 22:25:28.003984] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.003988] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.007 [2024-07-24 22:25:28.003999] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004002] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004005] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.007 [2024-07-24 22:25:28.004012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.007 [2024-07-24 22:25:28.004024] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.007 [2024-07-24 22:25:28.004167] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.007 [2024-07-24 22:25:28.004177] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.007 [2024-07-24 22:25:28.004180] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004184] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.007 [2024-07-24 22:25:28.004195] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004198] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004201] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.007 [2024-07-24 22:25:28.004208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.007 [2024-07-24 22:25:28.004223] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.007 [2024-07-24 22:25:28.004372] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.007 [2024-07-24 22:25:28.004381] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.007 [2024-07-24 22:25:28.004385] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004388] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.007 [2024-07-24 22:25:28.004399] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004403] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004406] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.007 [2024-07-24 22:25:28.004413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.007 [2024-07-24 22:25:28.004425] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.007 [2024-07-24 22:25:28.004576] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.007 [2024-07-24 22:25:28.004586] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.007 [2024-07-24 22:25:28.004589] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004592] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.007 [2024-07-24 22:25:28.004603] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004607] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004610] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.007 [2024-07-24 22:25:28.004616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.007 [2024-07-24 22:25:28.004628] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.007 [2024-07-24 22:25:28.004815] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.007 [2024-07-24 22:25:28.004824] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.007 [2024-07-24 22:25:28.004827] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004831] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.007 [2024-07-24 22:25:28.004842] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004846] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.004849] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.007 [2024-07-24 22:25:28.004855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.007 [2024-07-24 22:25:28.004868] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.007 [2024-07-24 22:25:28.005062] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.007 [2024-07-24 22:25:28.005072] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.007 [2024-07-24 22:25:28.005075] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.005078] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.007 [2024-07-24 22:25:28.005089] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.005093] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.005096] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.007 [2024-07-24 22:25:28.005103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.007 [2024-07-24 22:25:28.005116] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.007 [2024-07-24 22:25:28.005300] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.007 [2024-07-24 22:25:28.005310] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.007 [2024-07-24 22:25:28.005313] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.005317] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.007 [2024-07-24 22:25:28.005327] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.005331] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.005334] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.007 [2024-07-24 22:25:28.005341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.007 [2024-07-24 22:25:28.005353] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.007 [2024-07-24 22:25:28.005498] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.007 [2024-07-24 22:25:28.005508] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.007 [2024-07-24 22:25:28.005511] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.005514] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.007 [2024-07-24 22:25:28.005525] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.005529] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.005532] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.007 [2024-07-24 22:25:28.005539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.007 [2024-07-24 22:25:28.005550] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.007 [2024-07-24 22:25:28.005737] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.007 [2024-07-24 22:25:28.005746] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.007 [2024-07-24 22:25:28.005749] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.005753] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.007 [2024-07-24 22:25:28.005764] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.005768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.005771] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.007 [2024-07-24 22:25:28.005777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.007 [2024-07-24 22:25:28.005789] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.007 [2024-07-24 22:25:28.005975] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.007 [2024-07-24 22:25:28.005984] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.007 [2024-07-24 22:25:28.005987] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.005991] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.007 [2024-07-24 22:25:28.006001] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.006005] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.006008] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.007 [2024-07-24 22:25:28.006014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.007 [2024-07-24 22:25:28.006026] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.007 [2024-07-24 22:25:28.006411] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.007 [2024-07-24 22:25:28.006419] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.007 [2024-07-24 22:25:28.006423] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.006426] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.007 [2024-07-24 22:25:28.006435] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.006438] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.007 [2024-07-24 22:25:28.006441] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.007 [2024-07-24 22:25:28.006447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.007 [2024-07-24 22:25:28.006458] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 
00:26:33.007 [2024-07-24 22:25:28.006603] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.007 [2024-07-24 22:25:28.006612] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.008 [2024-07-24 22:25:28.006615] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.008 [2024-07-24 22:25:28.006618] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.008 [2024-07-24 22:25:28.006629] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.008 [2024-07-24 22:25:28.006633] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.008 [2024-07-24 22:25:28.006636] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.008 [2024-07-24 22:25:28.006642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.008 [2024-07-24 22:25:28.006654] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.008 [2024-07-24 22:25:28.006803] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.008 [2024-07-24 22:25:28.006812] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.008 [2024-07-24 22:25:28.006815] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.008 [2024-07-24 22:25:28.006819] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.008 [2024-07-24 22:25:28.006830] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.008 [2024-07-24 22:25:28.006833] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.008 [2024-07-24 22:25:28.006836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.008 [2024-07-24 22:25:28.006843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.008 [2024-07-24 22:25:28.006855] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.008 [2024-07-24 22:25:28.007005] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.008 [2024-07-24 22:25:28.007014] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:33.008 [2024-07-24 22:25:28.007017] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.008 [2024-07-24 22:25:28.007021] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.008 [2024-07-24 22:25:28.007031] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:33.008 [2024-07-24 22:25:28.007035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:33.008 [2024-07-24 22:25:28.007038] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb3cca0) 00:26:33.008 [2024-07-24 22:25:28.011049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.008 [2024-07-24 22:25:28.011066] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb913a0, cid 3, qid 0 00:26:33.008 [2024-07-24 22:25:28.011317] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:33.008 [2024-07-24 22:25:28.011327] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:26:33.008 [2024-07-24 22:25:28.011333] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:33.008 [2024-07-24 22:25:28.011337] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xb913a0) on tqpair=0xb3cca0 00:26:33.008 [2024-07-24 22:25:28.011346] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:26:33.008 0 Kelvin (-273 Celsius) 00:26:33.008 Available Spare: 0% 00:26:33.008 Available Spare Threshold: 0% 00:26:33.008 Life Percentage Used: 0% 00:26:33.008 Data Units Read: 0 00:26:33.008 Data Units Written: 0 00:26:33.008 Host Read Commands: 0 00:26:33.008 Host Write Commands: 0 00:26:33.008 Controller Busy Time: 0 minutes 00:26:33.008 Power Cycles: 0 00:26:33.008 Power On Hours: 0 hours 00:26:33.008 Unsafe Shutdowns: 0 00:26:33.008 Unrecoverable Media Errors: 0 00:26:33.008 Lifetime Error Log Entries: 0 00:26:33.008 Warning Temperature Time: 0 minutes 00:26:33.008 Critical Temperature Time: 0 minutes 00:26:33.008 00:26:33.008 Number of Queues 00:26:33.008 ================ 00:26:33.008 Number of I/O Submission Queues: 127 00:26:33.008 Number of I/O Completion Queues: 127 00:26:33.008 00:26:33.008 Active Namespaces 00:26:33.008 ================= 00:26:33.008 Namespace ID:1 00:26:33.008 Error Recovery Timeout: Unlimited 00:26:33.008 Command Set Identifier: NVM (00h) 00:26:33.008 Deallocate: Supported 00:26:33.008 Deallocated/Unwritten Error: Not Supported 00:26:33.008 Deallocated Read Value: Unknown 00:26:33.008 Deallocate in Write Zeroes: Not Supported 00:26:33.008 Deallocated Guard Field: 0xFFFF 00:26:33.008 Flush: Supported 00:26:33.008 Reservation: Supported 00:26:33.008 Namespace Sharing Capabilities: Multiple Controllers 00:26:33.008 Size (in LBAs): 131072 (0GiB) 00:26:33.008 Capacity (in LBAs): 131072 (0GiB) 00:26:33.008 Utilization (in LBAs): 131072 (0GiB) 00:26:33.008 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:33.008 EUI64: ABCDEF0123456789 00:26:33.008 UUID: 77607751-7fba-4ea3-be23-da7a9d90c0be 00:26:33.008 Thin Provisioning: Not Supported 00:26:33.008 Per-NS Atomic Units: Yes 00:26:33.008 Atomic Boundary Size (Normal): 0 00:26:33.008 Atomic Boundary Size (PFail): 0 00:26:33.008 Atomic Boundary Offset: 0 00:26:33.008 Maximum Single Source Range Length: 65535 00:26:33.008 Maximum Copy Length: 65535 00:26:33.008 Maximum Source Range Count: 1 00:26:33.008 NGUID/EUI64 Never Reused: No 00:26:33.008 Namespace Write Protected: No 00:26:33.008 Number of LBA Formats: 1 00:26:33.008 Current LBA Format: LBA Format #00 00:26:33.008 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:33.008 00:26:33.008 22:25:28 -- host/identify.sh@51 -- # sync 00:26:33.008 22:25:28 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.008 22:25:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:33.008 22:25:28 -- common/autotest_common.sh@10 -- # set +x 00:26:33.008 22:25:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:33.008 22:25:28 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:33.008 22:25:28 -- host/identify.sh@56 -- # nvmftestfini 00:26:33.008 22:25:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:33.008 22:25:28 -- nvmf/common.sh@116 -- # sync 00:26:33.008 22:25:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:33.008 22:25:28 -- nvmf/common.sh@119 -- # set +e 00:26:33.008 22:25:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:33.008 22:25:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
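The teardown traced here deletes the subsystem over JSON-RPC and then unloads the host-side kernel modules (the modprobe -v -r just issued, whose rmmod output follows). Done outside the framework, and reusing the rpc.py path from this workspace, the equivalent would be roughly the following sketch rather than the framework's exact code path:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sudo modprobe -v -r nvme-tcp nvme-fabrics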
00:26:33.008 rmmod nvme_tcp 00:26:33.008 rmmod nvme_fabrics 00:26:33.008 rmmod nvme_keyring 00:26:33.008 22:25:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:33.008 22:25:28 -- nvmf/common.sh@123 -- # set -e 00:26:33.008 22:25:28 -- nvmf/common.sh@124 -- # return 0 00:26:33.008 22:25:28 -- nvmf/common.sh@477 -- # '[' -n 3679321 ']' 00:26:33.008 22:25:28 -- nvmf/common.sh@478 -- # killprocess 3679321 00:26:33.008 22:25:28 -- common/autotest_common.sh@926 -- # '[' -z 3679321 ']' 00:26:33.008 22:25:28 -- common/autotest_common.sh@930 -- # kill -0 3679321 00:26:33.008 22:25:28 -- common/autotest_common.sh@931 -- # uname 00:26:33.008 22:25:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:33.008 22:25:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3679321 00:26:33.268 22:25:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:33.268 22:25:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:33.268 22:25:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3679321' 00:26:33.268 killing process with pid 3679321 00:26:33.268 22:25:28 -- common/autotest_common.sh@945 -- # kill 3679321 00:26:33.268 [2024-07-24 22:25:28.152982] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:33.268 22:25:28 -- common/autotest_common.sh@950 -- # wait 3679321 00:26:33.268 22:25:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:33.268 22:25:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:33.268 22:25:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:33.268 22:25:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:33.268 22:25:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:33.268 22:25:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.268 22:25:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.268 22:25:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.800 22:25:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:35.800 00:26:35.800 real 0m9.499s 00:26:35.800 user 0m7.593s 00:26:35.800 sys 0m4.722s 00:26:35.800 22:25:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:35.800 22:25:30 -- common/autotest_common.sh@10 -- # set +x 00:26:35.800 ************************************ 00:26:35.800 END TEST nvmf_identify 00:26:35.800 ************************************ 00:26:35.800 22:25:30 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:35.800 22:25:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:35.800 22:25:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:35.800 22:25:30 -- common/autotest_common.sh@10 -- # set +x 00:26:35.800 ************************************ 00:26:35.800 START TEST nvmf_perf 00:26:35.800 ************************************ 00:26:35.800 22:25:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:35.800 * Looking for test storage... 
00:26:35.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:35.800 22:25:30 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.800 22:25:30 -- nvmf/common.sh@7 -- # uname -s 00:26:35.800 22:25:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.800 22:25:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.800 22:25:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.800 22:25:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.800 22:25:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.800 22:25:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.800 22:25:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.800 22:25:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.800 22:25:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.800 22:25:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.800 22:25:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:35.800 22:25:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:35.800 22:25:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.800 22:25:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.800 22:25:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:35.800 22:25:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.800 22:25:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.800 22:25:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.800 22:25:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.801 22:25:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.801 22:25:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.801 22:25:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.801 22:25:30 -- paths/export.sh@5 -- # export PATH 00:26:35.801 22:25:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.801 22:25:30 -- nvmf/common.sh@46 -- # : 0 00:26:35.801 22:25:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:35.801 22:25:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:35.801 22:25:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:35.801 22:25:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.801 22:25:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.801 22:25:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:35.801 22:25:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:35.801 22:25:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:35.801 22:25:30 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:35.801 22:25:30 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:35.801 22:25:30 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:35.801 22:25:30 -- host/perf.sh@17 -- # nvmftestinit 00:26:35.801 22:25:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:35.801 22:25:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.801 22:25:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:35.801 22:25:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:35.801 22:25:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:35.801 22:25:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.801 22:25:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:35.801 22:25:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.801 22:25:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:35.801 22:25:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:35.801 22:25:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:35.801 22:25:30 -- common/autotest_common.sh@10 -- # set +x 00:26:41.070 22:25:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:41.070 22:25:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:41.070 22:25:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:41.070 22:25:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:41.070 22:25:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:41.070 22:25:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:41.070 22:25:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:41.070 22:25:35 -- nvmf/common.sh@294 -- # net_devs=() 
00:26:41.070 22:25:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:41.070 22:25:35 -- nvmf/common.sh@295 -- # e810=() 00:26:41.070 22:25:35 -- nvmf/common.sh@295 -- # local -ga e810 00:26:41.070 22:25:35 -- nvmf/common.sh@296 -- # x722=() 00:26:41.070 22:25:35 -- nvmf/common.sh@296 -- # local -ga x722 00:26:41.070 22:25:35 -- nvmf/common.sh@297 -- # mlx=() 00:26:41.070 22:25:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:41.070 22:25:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.070 22:25:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.070 22:25:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.070 22:25:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.070 22:25:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.070 22:25:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.070 22:25:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.070 22:25:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.071 22:25:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.071 22:25:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.071 22:25:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.071 22:25:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:41.071 22:25:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:41.071 22:25:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:41.071 22:25:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:41.071 22:25:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:41.071 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:41.071 22:25:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:41.071 22:25:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:41.071 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:41.071 22:25:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:41.071 22:25:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:41.071 22:25:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.071 22:25:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:41.071 22:25:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:26:41.071 22:25:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:41.071 Found net devices under 0000:86:00.0: cvl_0_0 00:26:41.071 22:25:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.071 22:25:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:41.071 22:25:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.071 22:25:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:41.071 22:25:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.071 22:25:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:41.071 Found net devices under 0000:86:00.1: cvl_0_1 00:26:41.071 22:25:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.071 22:25:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:41.071 22:25:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:41.071 22:25:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:41.071 22:25:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.071 22:25:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.071 22:25:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.071 22:25:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:41.071 22:25:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.071 22:25:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.071 22:25:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:41.071 22:25:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.071 22:25:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.071 22:25:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:41.071 22:25:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:41.071 22:25:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.071 22:25:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.071 22:25:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.071 22:25:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.071 22:25:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:41.071 22:25:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.071 22:25:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.071 22:25:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.071 22:25:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:41.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:26:41.071 00:26:41.071 --- 10.0.0.2 ping statistics --- 00:26:41.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.071 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:26:41.071 22:25:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:41.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:26:41.071 00:26:41.071 --- 10.0.0.1 ping statistics --- 00:26:41.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.071 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:26:41.071 22:25:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.071 22:25:35 -- nvmf/common.sh@410 -- # return 0 00:26:41.071 22:25:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:41.071 22:25:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.071 22:25:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:41.071 22:25:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.071 22:25:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:41.071 22:25:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:41.071 22:25:35 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:41.071 22:25:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:41.071 22:25:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:41.071 22:25:35 -- common/autotest_common.sh@10 -- # set +x 00:26:41.071 22:25:35 -- nvmf/common.sh@469 -- # nvmfpid=3682879 00:26:41.071 22:25:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:41.071 22:25:35 -- nvmf/common.sh@470 -- # waitforlisten 3682879 00:26:41.071 22:25:35 -- common/autotest_common.sh@819 -- # '[' -z 3682879 ']' 00:26:41.071 22:25:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.071 22:25:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:41.071 22:25:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.071 22:25:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:41.071 22:25:35 -- common/autotest_common.sh@10 -- # set +x 00:26:41.071 [2024-07-24 22:25:35.699749] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:26:41.071 [2024-07-24 22:25:35.699793] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.071 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.071 [2024-07-24 22:25:35.753009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:41.071 [2024-07-24 22:25:35.793272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:41.071 [2024-07-24 22:25:35.793384] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.071 [2024-07-24 22:25:35.793392] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.071 [2024-07-24 22:25:35.793398] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:41.071 [2024-07-24 22:25:35.793439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.071 [2024-07-24 22:25:35.793536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.071 [2024-07-24 22:25:35.793602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:41.071 [2024-07-24 22:25:35.793603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.638 22:25:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:41.638 22:25:36 -- common/autotest_common.sh@852 -- # return 0 00:26:41.638 22:25:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:41.638 22:25:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:41.638 22:25:36 -- common/autotest_common.sh@10 -- # set +x 00:26:41.638 22:25:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.638 22:25:36 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:41.638 22:25:36 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:26:44.921 22:25:39 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:44.921 22:25:39 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:26:44.921 22:25:39 -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:26:44.921 22:25:39 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:44.921 22:25:39 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:44.921 22:25:39 -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:26:44.921 22:25:39 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:44.921 22:25:39 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:44.921 22:25:39 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:45.179 [2024-07-24 22:25:40.071923] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.179 22:25:40 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:45.179 22:25:40 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:45.179 22:25:40 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:45.436 22:25:40 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:45.436 22:25:40 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:45.694 22:25:40 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:45.694 [2024-07-24 22:25:40.790697] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.694 22:25:40 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:45.951 22:25:40 -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:26:45.951 22:25:40 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:26:45.951 22:25:40 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
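A minimal standalone sketch of the bring-up that host/perf.sh has just traced, assuming "$SPDK" is shorthand for the repository root /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk visible in the paths above; the RPC calls and perf flags are copied from this run's trace, not a definitive recipe:

  # hypothetical shorthand for the repo root shown in the log paths
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # create the TCP transport, a subsystem with a Malloc and an NVMe namespace, and listeners
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # baseline against the local PCIe controller, then the same workload over NVMe/TCP
  $SPDK/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
  $SPDK/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The local PCIe run below serves as a baseline so the subsequent fabric results in this log can be read against the raw device's latency.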
00:26:45.951 22:25:40 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:26:47.355 Initializing NVMe Controllers 00:26:47.355 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:26:47.355 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:26:47.355 Initialization complete. Launching workers. 00:26:47.355 ======================================================== 00:26:47.355 Latency(us) 00:26:47.355 Device Information : IOPS MiB/s Average min max 00:26:47.355 PCIE (0000:5e:00.0) NSID 1 from core 0: 98896.85 386.32 323.21 34.86 4310.34 00:26:47.355 ======================================================== 00:26:47.355 Total : 98896.85 386.32 323.21 34.86 4310.34 00:26:47.355 00:26:47.355 22:25:42 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:47.355 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.729 Initializing NVMe Controllers 00:26:48.729 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:48.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:48.729 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:48.729 Initialization complete. Launching workers. 00:26:48.729 ======================================================== 00:26:48.729 Latency(us) 00:26:48.729 Device Information : IOPS MiB/s Average min max 00:26:48.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 114.00 0.45 8960.20 575.38 45690.45 00:26:48.729 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.00 0.26 15234.78 7811.85 47904.64 00:26:48.729 ======================================================== 00:26:48.729 Total : 180.00 0.70 11260.88 575.38 47904.64 00:26:48.729 00:26:48.729 22:25:43 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:48.729 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.663 Initializing NVMe Controllers 00:26:49.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:49.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:49.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:49.663 Initialization complete. Launching workers. 
00:26:49.663 ======================================================== 00:26:49.663 Latency(us) 00:26:49.663 Device Information : IOPS MiB/s Average min max 00:26:49.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7783.70 30.41 4111.56 817.76 8432.91 00:26:49.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3867.44 15.11 8291.52 5611.56 15858.06 00:26:49.663 ======================================================== 00:26:49.663 Total : 11651.15 45.51 5499.04 817.76 15858.06 00:26:49.663 00:26:49.663 22:25:44 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:49.663 22:25:44 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:49.663 22:25:44 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:49.922 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.456 Initializing NVMe Controllers 00:26:52.456 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.456 Controller IO queue size 128, less than required. 00:26:52.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:52.456 Controller IO queue size 128, less than required. 00:26:52.456 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:52.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:52.456 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:52.456 Initialization complete. Launching workers. 00:26:52.456 ======================================================== 00:26:52.457 Latency(us) 00:26:52.457 Device Information : IOPS MiB/s Average min max 00:26:52.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 806.91 201.73 163866.85 100582.30 228175.07 00:26:52.457 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 586.43 146.61 229735.11 100998.13 347629.19 00:26:52.457 ======================================================== 00:26:52.457 Total : 1393.34 348.34 191589.66 100582.30 347629.19 00:26:52.457 00:26:52.457 22:25:47 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:52.457 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.715 No valid NVMe controllers or AIO or URING devices found 00:26:52.715 Initializing NVMe Controllers 00:26:52.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:52.715 Controller IO queue size 128, less than required. 00:26:52.715 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:52.715 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:52.715 Controller IO queue size 128, less than required. 00:26:52.715 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:52.715 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:52.715 WARNING: Some requested NVMe devices were skipped 00:26:52.715 22:25:47 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:52.715 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.250 Initializing NVMe Controllers 00:26:55.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:55.250 Controller IO queue size 128, less than required. 00:26:55.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:55.250 Controller IO queue size 128, less than required. 00:26:55.250 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:55.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:55.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:55.250 Initialization complete. Launching workers. 00:26:55.250 00:26:55.250 ==================== 00:26:55.250 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:55.250 TCP transport: 00:26:55.250 polls: 50596 00:26:55.250 idle_polls: 15358 00:26:55.250 sock_completions: 35238 00:26:55.250 nvme_completions: 3127 00:26:55.250 submitted_requests: 4891 00:26:55.250 queued_requests: 1 00:26:55.250 00:26:55.250 ==================== 00:26:55.250 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:55.250 TCP transport: 00:26:55.250 polls: 50892 00:26:55.250 idle_polls: 16585 00:26:55.250 sock_completions: 34307 00:26:55.250 nvme_completions: 3072 00:26:55.250 submitted_requests: 4760 00:26:55.250 queued_requests: 1 00:26:55.250 ======================================================== 00:26:55.250 Latency(us) 00:26:55.250 Device Information : IOPS MiB/s Average min max 00:26:55.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 845.50 211.37 156887.12 96028.19 226097.30 00:26:55.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 831.50 207.87 159745.59 87566.10 290334.61 00:26:55.250 ======================================================== 00:26:55.250 Total : 1676.99 419.25 158304.42 87566.10 290334.61 00:26:55.250 00:26:55.250 22:25:50 -- host/perf.sh@66 -- # sync 00:26:55.250 22:25:50 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:55.509 22:25:50 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:55.509 22:25:50 -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:26:55.509 22:25:50 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:58.794 22:25:53 -- host/perf.sh@72 -- # ls_guid=b3f1cc7a-9097-4603-b731-e52756d1d02d 00:26:58.794 22:25:53 -- host/perf.sh@73 -- # get_lvs_free_mb b3f1cc7a-9097-4603-b731-e52756d1d02d 00:26:58.794 22:25:53 -- common/autotest_common.sh@1343 -- # local lvs_uuid=b3f1cc7a-9097-4603-b731-e52756d1d02d 00:26:58.794 22:25:53 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:58.794 22:25:53 -- common/autotest_common.sh@1345 -- # local fc 00:26:58.794 22:25:53 -- common/autotest_common.sh@1346 -- # local cs 00:26:58.794 22:25:53 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:58.794 22:25:53 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:58.794 { 00:26:58.795 "uuid": "b3f1cc7a-9097-4603-b731-e52756d1d02d", 00:26:58.795 "name": "lvs_0", 00:26:58.795 "base_bdev": "Nvme0n1", 00:26:58.795 "total_data_clusters": 238234, 00:26:58.795 "free_clusters": 238234, 00:26:58.795 "block_size": 512, 00:26:58.795 "cluster_size": 4194304 00:26:58.795 } 00:26:58.795 ]' 00:26:58.795 22:25:53 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="b3f1cc7a-9097-4603-b731-e52756d1d02d") .free_clusters' 00:26:58.795 22:25:53 -- common/autotest_common.sh@1348 -- # fc=238234 00:26:58.795 22:25:53 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="b3f1cc7a-9097-4603-b731-e52756d1d02d") .cluster_size' 00:26:59.052 22:25:53 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:59.052 22:25:53 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:26:59.052 22:25:53 -- common/autotest_common.sh@1353 -- # echo 952936 00:26:59.052 952936 00:26:59.052 22:25:53 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:26:59.052 22:25:53 -- host/perf.sh@78 -- # free_mb=20480 00:26:59.052 22:25:53 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b3f1cc7a-9097-4603-b731-e52756d1d02d lbd_0 20480 00:26:59.618 22:25:54 -- host/perf.sh@80 -- # lb_guid=e2671c17-0124-42bf-b8bb-8af5509709ae 00:26:59.618 22:25:54 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore e2671c17-0124-42bf-b8bb-8af5509709ae lvs_n_0 00:27:00.184 22:25:55 -- host/perf.sh@83 -- # ls_nested_guid=94bbb803-c777-4285-af39-f852f3adcad0 00:27:00.184 22:25:55 -- host/perf.sh@84 -- # get_lvs_free_mb 94bbb803-c777-4285-af39-f852f3adcad0 00:27:00.184 22:25:55 -- common/autotest_common.sh@1343 -- # local lvs_uuid=94bbb803-c777-4285-af39-f852f3adcad0 00:27:00.184 22:25:55 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:00.184 22:25:55 -- common/autotest_common.sh@1345 -- # local fc 00:27:00.184 22:25:55 -- common/autotest_common.sh@1346 -- # local cs 00:27:00.184 22:25:55 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:00.444 22:25:55 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:00.444 { 00:27:00.444 "uuid": "b3f1cc7a-9097-4603-b731-e52756d1d02d", 00:27:00.444 "name": "lvs_0", 00:27:00.444 "base_bdev": "Nvme0n1", 00:27:00.444 "total_data_clusters": 238234, 00:27:00.444 "free_clusters": 233114, 00:27:00.444 "block_size": 512, 00:27:00.444 "cluster_size": 4194304 00:27:00.444 }, 00:27:00.444 { 00:27:00.444 "uuid": "94bbb803-c777-4285-af39-f852f3adcad0", 00:27:00.444 "name": "lvs_n_0", 00:27:00.444 "base_bdev": "e2671c17-0124-42bf-b8bb-8af5509709ae", 00:27:00.444 "total_data_clusters": 5114, 00:27:00.444 "free_clusters": 5114, 00:27:00.444 "block_size": 512, 00:27:00.444 "cluster_size": 4194304 00:27:00.444 } 00:27:00.444 ]' 00:27:00.444 22:25:55 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="94bbb803-c777-4285-af39-f852f3adcad0") .free_clusters' 00:27:00.444 22:25:55 -- common/autotest_common.sh@1348 -- # fc=5114 00:27:00.444 22:25:55 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="94bbb803-c777-4285-af39-f852f3adcad0") .cluster_size' 00:27:00.444 22:25:55 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:00.444 22:25:55 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:27:00.444 22:25:55 -- common/autotest_common.sh@1353 -- # echo 20456 00:27:00.444 20456 00:27:00.444 22:25:55 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:27:00.444 22:25:55 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 94bbb803-c777-4285-af39-f852f3adcad0 lbd_nest_0 20456 00:27:00.703 22:25:55 -- host/perf.sh@88 -- # lb_nested_guid=2bc754db-8b11-43ec-b823-6e5a4d840659 00:27:00.703 22:25:55 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:00.703 22:25:55 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:27:00.703 22:25:55 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 2bc754db-8b11-43ec-b823-6e5a4d840659 00:27:00.961 22:25:55 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.220 22:25:56 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:27:01.220 22:25:56 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:27:01.220 22:25:56 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:01.220 22:25:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:01.220 22:25:56 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:01.220 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.422 Initializing NVMe Controllers 00:27:13.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:13.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:13.422 Initialization complete. Launching workers. 00:27:13.422 ======================================================== 00:27:13.422 Latency(us) 00:27:13.422 Device Information : IOPS MiB/s Average min max 00:27:13.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 41.59 0.02 24140.07 385.10 49392.56 00:27:13.422 ======================================================== 00:27:13.422 Total : 41.59 0.02 24140.07 385.10 49392.56 00:27:13.422 00:27:13.422 22:26:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:13.422 22:26:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:13.422 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.425 Initializing NVMe Controllers 00:27:23.425 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:23.425 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:23.425 Initialization complete. Launching workers. 
00:27:23.425 ======================================================== 00:27:23.425 Latency(us) 00:27:23.425 Device Information : IOPS MiB/s Average min max 00:27:23.425 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 83.20 10.40 12027.85 5882.44 22899.34 00:27:23.425 ======================================================== 00:27:23.425 Total : 83.20 10.40 12027.85 5882.44 22899.34 00:27:23.425 00:27:23.425 22:26:16 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:23.425 22:26:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:23.425 22:26:16 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:23.425 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.404 Initializing NVMe Controllers 00:27:33.404 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:33.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:33.404 Initialization complete. Launching workers. 00:27:33.404 ======================================================== 00:27:33.404 Latency(us) 00:27:33.404 Device Information : IOPS MiB/s Average min max 00:27:33.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6645.28 3.24 4816.38 521.22 13025.89 00:27:33.404 ======================================================== 00:27:33.404 Total : 6645.28 3.24 4816.38 521.22 13025.89 00:27:33.404 00:27:33.404 22:26:27 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:33.404 22:26:27 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:33.404 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.383 Initializing NVMe Controllers 00:27:43.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:43.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:43.383 Initialization complete. Launching workers. 00:27:43.383 ======================================================== 00:27:43.383 Latency(us) 00:27:43.384 Device Information : IOPS MiB/s Average min max 00:27:43.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1461.60 182.70 21922.66 2220.75 54186.84 00:27:43.384 ======================================================== 00:27:43.384 Total : 1461.60 182.70 21922.66 2220.75 54186.84 00:27:43.384 00:27:43.384 22:26:37 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:27:43.384 22:26:37 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:43.384 22:26:37 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:43.384 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.364 Initializing NVMe Controllers 00:27:53.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:53.364 Controller IO queue size 128, less than required. 00:27:53.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:53.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:53.364 Initialization complete. Launching workers. 
00:27:53.364 ======================================================== 00:27:53.364 Latency(us) 00:27:53.364 Device Information : IOPS MiB/s Average min max 00:27:53.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14206.97 6.94 9009.73 1337.78 22246.44 00:27:53.364 ======================================================== 00:27:53.364 Total : 14206.97 6.94 9009.73 1337.78 22246.44 00:27:53.364 00:27:53.364 22:26:47 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:53.364 22:26:47 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:53.364 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.382 Initializing NVMe Controllers 00:28:03.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:03.382 Controller IO queue size 128, less than required. 00:28:03.382 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:03.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:03.382 Initialization complete. Launching workers. 00:28:03.382 ======================================================== 00:28:03.382 Latency(us) 00:28:03.382 Device Information : IOPS MiB/s Average min max 00:28:03.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1142.20 142.77 112559.84 23827.22 229154.83 00:28:03.382 ======================================================== 00:28:03.382 Total : 1142.20 142.77 112559.84 23827.22 229154.83 00:28:03.382 00:28:03.382 22:26:58 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:03.382 22:26:58 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2bc754db-8b11-43ec-b823-6e5a4d840659 00:28:03.949 22:26:59 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:04.207 22:26:59 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e2671c17-0124-42bf-b8bb-8af5509709ae 00:28:04.466 22:26:59 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:04.725 22:26:59 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:28:04.725 22:26:59 -- host/perf.sh@114 -- # nvmftestfini 00:28:04.725 22:26:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:04.725 22:26:59 -- nvmf/common.sh@116 -- # sync 00:28:04.725 22:26:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:04.725 22:26:59 -- nvmf/common.sh@119 -- # set +e 00:28:04.725 22:26:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:04.725 22:26:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:04.725 rmmod nvme_tcp 00:28:04.725 rmmod nvme_fabrics 00:28:04.725 rmmod nvme_keyring 00:28:04.725 22:26:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:04.725 22:26:59 -- nvmf/common.sh@123 -- # set -e 00:28:04.725 22:26:59 -- nvmf/common.sh@124 -- # return 0 00:28:04.725 22:26:59 -- nvmf/common.sh@477 -- # '[' -n 3682879 ']' 00:28:04.725 22:26:59 -- nvmf/common.sh@478 -- # killprocess 3682879 00:28:04.725 22:26:59 -- common/autotest_common.sh@926 -- # '[' -z 3682879 ']' 00:28:04.725 22:26:59 -- common/autotest_common.sh@930 -- # kill 
-0 3682879 00:28:04.725 22:26:59 -- common/autotest_common.sh@931 -- # uname 00:28:04.725 22:26:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:04.725 22:26:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3682879 00:28:04.725 22:26:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:04.725 22:26:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:04.725 22:26:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3682879' 00:28:04.725 killing process with pid 3682879 00:28:04.725 22:26:59 -- common/autotest_common.sh@945 -- # kill 3682879 00:28:04.725 22:26:59 -- common/autotest_common.sh@950 -- # wait 3682879 00:28:06.102 22:27:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:06.102 22:27:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:06.102 22:27:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:06.102 22:27:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:06.102 22:27:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:06.102 22:27:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.102 22:27:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.102 22:27:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.638 22:27:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:08.638 00:28:08.638 real 1m32.809s 00:28:08.638 user 5m36.580s 00:28:08.638 sys 0m13.296s 00:28:08.638 22:27:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:08.638 22:27:03 -- common/autotest_common.sh@10 -- # set +x 00:28:08.638 ************************************ 00:28:08.638 END TEST nvmf_perf 00:28:08.638 ************************************ 00:28:08.638 22:27:03 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:08.638 22:27:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:08.638 22:27:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:08.638 22:27:03 -- common/autotest_common.sh@10 -- # set +x 00:28:08.638 ************************************ 00:28:08.638 START TEST nvmf_fio_host 00:28:08.638 ************************************ 00:28:08.638 22:27:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:28:08.638 * Looking for test storage... 
00:28:08.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:08.638 22:27:03 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.638 22:27:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.638 22:27:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.638 22:27:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.638 22:27:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.638 22:27:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.638 22:27:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.638 22:27:03 -- paths/export.sh@5 -- # export PATH 00:28:08.638 22:27:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.638 22:27:03 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:08.638 22:27:03 -- nvmf/common.sh@7 -- # uname -s 00:28:08.638 22:27:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:08.638 22:27:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:08.638 22:27:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:08.638 22:27:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:08.638 22:27:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:08.638 22:27:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:08.638 22:27:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:08.638 22:27:03 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:08.638 22:27:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:08.638 22:27:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:08.638 22:27:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:08.638 22:27:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:08.638 22:27:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:08.638 22:27:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:08.638 22:27:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:08.638 22:27:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:08.638 22:27:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:08.638 22:27:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:08.638 22:27:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:08.638 22:27:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.638 22:27:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.638 22:27:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.638 22:27:03 -- paths/export.sh@5 -- # export PATH 00:28:08.638 22:27:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:08.638 22:27:03 -- nvmf/common.sh@46 -- # : 0 00:28:08.638 22:27:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:08.638 22:27:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:08.638 22:27:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:08.638 22:27:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:08.638 22:27:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:08.638 22:27:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:08.638 22:27:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:08.638 22:27:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:08.638 22:27:03 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:08.638 22:27:03 -- host/fio.sh@14 -- # nvmftestinit 00:28:08.638 22:27:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:08.638 22:27:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.638 22:27:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:08.638 22:27:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:08.638 22:27:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:08.638 22:27:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.638 22:27:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:08.638 22:27:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.638 22:27:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:08.638 22:27:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:08.638 22:27:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:08.638 22:27:03 -- common/autotest_common.sh@10 -- # set +x 00:28:13.911 22:27:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:13.911 22:27:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:13.911 22:27:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:13.911 22:27:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:13.911 22:27:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:13.911 22:27:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:13.911 22:27:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:13.911 22:27:08 -- nvmf/common.sh@294 -- # net_devs=() 00:28:13.911 22:27:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:13.911 22:27:08 -- nvmf/common.sh@295 -- # e810=() 00:28:13.911 22:27:08 -- nvmf/common.sh@295 -- # local -ga e810 00:28:13.911 22:27:08 -- nvmf/common.sh@296 -- # x722=() 00:28:13.911 22:27:08 -- nvmf/common.sh@296 -- # local -ga x722 00:28:13.911 22:27:08 -- nvmf/common.sh@297 -- # mlx=() 00:28:13.911 22:27:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:13.911 22:27:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.911 22:27:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.911 22:27:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.911 22:27:08 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.911 22:27:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.911 22:27:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.911 22:27:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.911 22:27:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.911 22:27:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.911 22:27:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.911 22:27:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.911 22:27:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:13.911 22:27:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:13.911 22:27:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:13.911 22:27:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:13.911 22:27:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:13.911 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:13.911 22:27:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:13.911 22:27:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:13.911 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:13.911 22:27:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:13.911 22:27:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:13.911 22:27:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.911 22:27:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:13.911 22:27:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.911 22:27:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:13.911 Found net devices under 0000:86:00.0: cvl_0_0 00:28:13.911 22:27:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.911 22:27:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:13.911 22:27:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.911 22:27:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:13.911 22:27:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.911 22:27:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:13.911 Found net devices under 0000:86:00.1: cvl_0_1 
00:28:13.911 22:27:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.911 22:27:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:13.911 22:27:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:13.911 22:27:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:13.911 22:27:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:13.911 22:27:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.911 22:27:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.911 22:27:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.911 22:27:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:13.911 22:27:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.911 22:27:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.911 22:27:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:13.911 22:27:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.911 22:27:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.911 22:27:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:13.911 22:27:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:13.911 22:27:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.911 22:27:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:13.911 22:27:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.911 22:27:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:13.911 22:27:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:13.911 22:27:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:13.911 22:27:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:13.911 22:27:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.170 22:27:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:14.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:28:14.170 00:28:14.170 --- 10.0.0.2 ping statistics --- 00:28:14.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.170 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:28:14.170 22:27:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:14.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:28:14.170 00:28:14.171 --- 10.0.0.1 ping statistics --- 00:28:14.171 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.171 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:28:14.171 22:27:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.171 22:27:09 -- nvmf/common.sh@410 -- # return 0 00:28:14.171 22:27:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:14.171 22:27:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.171 22:27:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:14.171 22:27:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:14.171 22:27:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.171 22:27:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:14.171 22:27:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:14.171 22:27:09 -- host/fio.sh@16 -- # [[ y != y ]] 00:28:14.171 22:27:09 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:28:14.171 22:27:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:14.171 22:27:09 -- common/autotest_common.sh@10 -- # set +x 00:28:14.171 22:27:09 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:14.171 22:27:09 -- host/fio.sh@24 -- # nvmfpid=3700748 00:28:14.171 22:27:09 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:14.171 22:27:09 -- host/fio.sh@28 -- # waitforlisten 3700748 00:28:14.171 22:27:09 -- common/autotest_common.sh@819 -- # '[' -z 3700748 ']' 00:28:14.171 22:27:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.171 22:27:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:14.171 22:27:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.171 22:27:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:14.171 22:27:09 -- common/autotest_common.sh@10 -- # set +x 00:28:14.171 [2024-07-24 22:27:09.130324] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:14.171 [2024-07-24 22:27:09.130370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.171 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.171 [2024-07-24 22:27:09.188383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:14.171 [2024-07-24 22:27:09.228907] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:14.171 [2024-07-24 22:27:09.229019] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.171 [2024-07-24 22:27:09.229028] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.171 [2024-07-24 22:27:09.229066] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
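The nvmf_tcp_init sequence traced above amounts to a handful of ip/iptables commands plus starting nvmf_tgt inside the namespace. A minimal sketch, assuming the same interface names (cvl_0_0, cvl_0_1) and a relative build path in place of the Jenkins workspace path:

# Isolate one NIC port in its own namespace so the target (10.0.0.2) and the
# initiator (10.0.0.1) talk over real hardware on a single host.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # target address reachable
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # initiator address reachable
# Start the NVMe-oF target inside the namespace, as host/fio.sh@23 does above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &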
00:28:14.171 [2024-07-24 22:27:09.229116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:14.171 [2024-07-24 22:27:09.229214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:14.171 [2024-07-24 22:27:09.229281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:14.171 [2024-07-24 22:27:09.229282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.107 22:27:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:15.107 22:27:09 -- common/autotest_common.sh@852 -- # return 0 00:28:15.107 22:27:09 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:15.107 [2024-07-24 22:27:10.116999] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.107 22:27:10 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:28:15.107 22:27:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:15.107 22:27:10 -- common/autotest_common.sh@10 -- # set +x 00:28:15.107 22:27:10 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:28:15.366 Malloc1 00:28:15.366 22:27:10 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:15.624 22:27:10 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:15.624 22:27:10 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:15.883 [2024-07-24 22:27:10.891392] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:15.883 22:27:10 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:16.141 22:27:11 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:16.141 22:27:11 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:16.141 22:27:11 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:16.141 22:27:11 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:16.141 22:27:11 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:16.141 22:27:11 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:16.141 22:27:11 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:16.141 22:27:11 -- common/autotest_common.sh@1320 -- # shift 00:28:16.141 22:27:11 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:16.141 22:27:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.141 22:27:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:16.141 22:27:11 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:28:16.141 22:27:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:16.141 22:27:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:16.141 22:27:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:16.141 22:27:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:16.141 22:27:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:16.141 22:27:11 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:16.141 22:27:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:16.141 22:27:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:16.141 22:27:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:16.141 22:27:11 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:16.141 22:27:11 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:16.400 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:16.400 fio-3.35 00:28:16.400 Starting 1 thread 00:28:16.400 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.933 00:28:18.933 test: (groupid=0, jobs=1): err= 0: pid=3701369: Wed Jul 24 22:27:13 2024 00:28:18.933 read: IOPS=12.1k, BW=47.2MiB/s (49.4MB/s)(94.5MiB/2004msec) 00:28:18.933 slat (nsec): min=1618, max=241635, avg=1779.88, stdev=2177.37 00:28:18.933 clat (usec): min=3304, max=24632, avg=6203.78, stdev=1459.89 00:28:18.933 lat (usec): min=3306, max=24654, avg=6205.56, stdev=1460.26 00:28:18.933 clat percentiles (usec): 00:28:18.933 | 1.00th=[ 4080], 5.00th=[ 4686], 10.00th=[ 5014], 20.00th=[ 5342], 00:28:18.933 | 30.00th=[ 5604], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6194], 00:28:18.933 | 70.00th=[ 6456], 80.00th=[ 6783], 90.00th=[ 7504], 95.00th=[ 8160], 00:28:18.933 | 99.00th=[11338], 99.50th=[13304], 99.90th=[24249], 99.95th=[24511], 00:28:18.933 | 99.99th=[24511] 00:28:18.933 bw ( KiB/s): min=45528, max=49440, per=99.89%, avg=48236.00, stdev=1848.73, samples=4 00:28:18.933 iops : min=11382, max=12360, avg=12059.00, stdev=462.18, samples=4 00:28:18.933 write: IOPS=12.0k, BW=47.0MiB/s (49.3MB/s)(94.1MiB/2004msec); 0 zone resets 00:28:18.933 slat (nsec): min=1669, max=225220, avg=1862.34, stdev=1609.80 00:28:18.933 clat (usec): min=1925, max=16786, avg=4381.96, stdev=939.67 00:28:18.933 lat (usec): min=1927, max=16812, avg=4383.82, stdev=940.09 00:28:18.933 clat percentiles (usec): 00:28:18.933 | 1.00th=[ 2737], 5.00th=[ 3163], 10.00th=[ 3425], 20.00th=[ 3752], 00:28:18.933 | 30.00th=[ 4015], 40.00th=[ 4228], 50.00th=[ 4359], 60.00th=[ 4555], 00:28:18.933 | 70.00th=[ 4686], 80.00th=[ 4883], 90.00th=[ 5145], 95.00th=[ 5473], 00:28:18.933 | 99.00th=[ 7111], 99.50th=[ 9241], 99.90th=[16450], 99.95th=[16712], 00:28:18.933 | 99.99th=[16712] 00:28:18.933 bw ( KiB/s): min=45832, max=49192, per=99.98%, avg=48092.00, stdev=1553.30, samples=4 00:28:18.933 iops : min=11458, max=12298, avg=12023.00, stdev=388.32, samples=4 00:28:18.933 lat (msec) : 2=0.01%, 4=15.35%, 10=83.52%, 20=1.05%, 50=0.07% 00:28:18.933 cpu : usr=70.44%, sys=23.66%, ctx=29, majf=0, minf=29 00:28:18.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:28:18.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:18.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:18.933 issued rwts: total=24193,24098,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:18.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:18.933 00:28:18.933 Run status group 0 (all jobs): 00:28:18.933 READ: bw=47.2MiB/s (49.4MB/s), 47.2MiB/s-47.2MiB/s (49.4MB/s-49.4MB/s), io=94.5MiB (99.1MB), run=2004-2004msec 00:28:18.933 WRITE: bw=47.0MiB/s (49.3MB/s), 47.0MiB/s-47.0MiB/s (49.3MB/s-49.3MB/s), io=94.1MiB (98.7MB), run=2004-2004msec 00:28:18.933 22:27:13 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:18.933 22:27:13 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:18.933 22:27:13 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:18.933 22:27:13 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:18.933 22:27:13 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:18.933 22:27:13 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:18.933 22:27:13 -- common/autotest_common.sh@1320 -- # shift 00:28:18.933 22:27:13 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:18.933 22:27:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:18.933 22:27:13 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:18.933 22:27:13 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:18.933 22:27:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:18.933 22:27:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:18.933 22:27:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:18.933 22:27:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:18.933 22:27:13 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:18.933 22:27:13 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:18.933 22:27:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:18.933 22:27:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:18.933 22:27:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:18.933 22:27:13 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:18.933 22:27:13 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:28:19.192 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:28:19.192 fio-3.35 00:28:19.192 Starting 1 thread 00:28:19.192 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.728 00:28:21.728 test: (groupid=0, jobs=1): err= 0: pid=3701938: Wed Jul 24 22:27:16 2024 00:28:21.728 read: IOPS=6552, BW=102MiB/s (107MB/s)(206MiB/2016msec) 00:28:21.728 slat (nsec): min=2543, max=86466, avg=2937.28, stdev=1500.35 00:28:21.728 
clat (usec): min=3302, max=53718, avg=12182.80, stdev=6171.19 00:28:21.728 lat (usec): min=3305, max=53721, avg=12185.74, stdev=6171.46 00:28:21.728 clat percentiles (usec): 00:28:21.728 | 1.00th=[ 4555], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 7767], 00:28:21.728 | 30.00th=[ 8717], 40.00th=[ 9896], 50.00th=[11600], 60.00th=[12780], 00:28:21.728 | 70.00th=[13829], 80.00th=[15008], 90.00th=[17171], 95.00th=[20579], 00:28:21.728 | 99.00th=[47449], 99.50th=[49021], 99.90th=[51119], 99.95th=[53216], 00:28:21.728 | 99.99th=[53740] 00:28:21.728 bw ( KiB/s): min=36032, max=81312, per=50.55%, avg=53000.00, stdev=19959.30, samples=4 00:28:21.728 iops : min= 2252, max= 5082, avg=3312.50, stdev=1247.46, samples=4 00:28:21.728 write: IOPS=3862, BW=60.3MiB/s (63.3MB/s)(109MiB/1808msec); 0 zone resets 00:28:21.728 slat (usec): min=30, max=235, avg=32.44, stdev= 6.46 00:28:21.728 clat (usec): min=5252, max=54616, avg=12825.97, stdev=5450.73 00:28:21.728 lat (usec): min=5283, max=54647, avg=12858.41, stdev=5452.47 00:28:21.728 clat percentiles (usec): 00:28:21.728 | 1.00th=[ 6325], 5.00th=[ 7177], 10.00th=[ 7701], 20.00th=[ 8717], 00:28:21.728 | 30.00th=[ 9503], 40.00th=[11076], 50.00th=[12518], 60.00th=[13566], 00:28:21.728 | 70.00th=[14484], 80.00th=[15401], 90.00th=[17171], 95.00th=[20579], 00:28:21.728 | 99.00th=[31851], 99.50th=[50594], 99.90th=[53740], 99.95th=[54264], 00:28:21.728 | 99.99th=[54789] 00:28:21.728 bw ( KiB/s): min=37536, max=84672, per=89.49%, avg=55304.00, stdev=20710.36, samples=4 00:28:21.728 iops : min= 2346, max= 5292, avg=3456.50, stdev=1294.40, samples=4 00:28:21.728 lat (msec) : 4=0.14%, 10=38.58%, 20=55.61%, 50=5.27%, 100=0.39% 00:28:21.728 cpu : usr=85.66%, sys=12.56%, ctx=94, majf=0, minf=43 00:28:21.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:28:21.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:21.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:21.728 issued rwts: total=13210,6983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:21.728 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:21.728 00:28:21.728 Run status group 0 (all jobs): 00:28:21.728 READ: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=206MiB (216MB), run=2016-2016msec 00:28:21.728 WRITE: bw=60.3MiB/s (63.3MB/s), 60.3MiB/s-60.3MiB/s (63.3MB/s-63.3MB/s), io=109MiB (114MB), run=1808-1808msec 00:28:21.728 22:27:16 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:21.728 22:27:16 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:28:21.728 22:27:16 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:28:21.728 22:27:16 -- host/fio.sh@51 -- # get_nvme_bdfs 00:28:21.728 22:27:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:28:21.728 22:27:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:28:21.728 22:27:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:21.728 22:27:16 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:21.728 22:27:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:28:21.728 22:27:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:28:21.728 22:27:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:28:21.728 22:27:16 -- host/fio.sh@52 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:28:25.069 Nvme0n1 00:28:25.069 22:27:19 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:28:27.600 22:27:22 -- host/fio.sh@53 -- # ls_guid=ce27f530-6289-4b17-b661-bffbf4bf97d3 00:28:27.600 22:27:22 -- host/fio.sh@54 -- # get_lvs_free_mb ce27f530-6289-4b17-b661-bffbf4bf97d3 00:28:27.600 22:27:22 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ce27f530-6289-4b17-b661-bffbf4bf97d3 00:28:27.600 22:27:22 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:27.600 22:27:22 -- common/autotest_common.sh@1345 -- # local fc 00:28:27.600 22:27:22 -- common/autotest_common.sh@1346 -- # local cs 00:28:27.600 22:27:22 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:27.858 22:27:22 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:27.858 { 00:28:27.858 "uuid": "ce27f530-6289-4b17-b661-bffbf4bf97d3", 00:28:27.858 "name": "lvs_0", 00:28:27.858 "base_bdev": "Nvme0n1", 00:28:27.858 "total_data_clusters": 930, 00:28:27.858 "free_clusters": 930, 00:28:27.858 "block_size": 512, 00:28:27.858 "cluster_size": 1073741824 00:28:27.858 } 00:28:27.858 ]' 00:28:27.858 22:27:22 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ce27f530-6289-4b17-b661-bffbf4bf97d3") .free_clusters' 00:28:27.858 22:27:22 -- common/autotest_common.sh@1348 -- # fc=930 00:28:27.858 22:27:22 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ce27f530-6289-4b17-b661-bffbf4bf97d3") .cluster_size' 00:28:27.858 22:27:22 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:28:27.858 22:27:22 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:28:27.858 22:27:22 -- common/autotest_common.sh@1353 -- # echo 952320 00:28:27.858 952320 00:28:27.858 22:27:22 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:28:28.117 a7322215-c93c-4f7a-8b12-2649d10b99f7 00:28:28.117 22:27:23 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:28:28.375 22:27:23 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:28:28.634 22:27:23 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:28.634 22:27:23 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:28.634 22:27:23 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:28.634 22:27:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:28.634 22:27:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:28.634 22:27:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:28.634 22:27:23 -- 
common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:28.634 22:27:23 -- common/autotest_common.sh@1320 -- # shift 00:28:28.634 22:27:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:28.634 22:27:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:28.634 22:27:23 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:28.634 22:27:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:28.634 22:27:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:28.634 22:27:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:28.634 22:27:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:28.634 22:27:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:28.634 22:27:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:28.634 22:27:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:28.634 22:27:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:28.891 22:27:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:28.891 22:27:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:28.891 22:27:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:28.891 22:27:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:29.149 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:29.149 fio-3.35 00:28:29.149 Starting 1 thread 00:28:29.149 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.683 00:28:31.683 test: (groupid=0, jobs=1): err= 0: pid=3703716: Wed Jul 24 22:27:26 2024 00:28:31.683 read: IOPS=7940, BW=31.0MiB/s (32.5MB/s)(62.2MiB/2006msec) 00:28:31.683 slat (nsec): min=1570, max=112884, avg=1822.47, stdev=1238.46 00:28:31.683 clat (msec): min=2, max=178, avg= 9.25, stdev=10.83 00:28:31.683 lat (msec): min=2, max=178, avg= 9.25, stdev=10.83 00:28:31.683 clat percentiles (msec): 00:28:31.683 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:28:31.683 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:28:31.683 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 12], 95.00th=[ 13], 00:28:31.683 | 99.00th=[ 15], 99.50th=[ 18], 99.90th=[ 174], 99.95th=[ 176], 00:28:31.683 | 99.99th=[ 176] 00:28:31.683 bw ( KiB/s): min=21848, max=35456, per=99.87%, avg=31718.00, stdev=6588.92, samples=4 00:28:31.683 iops : min= 5462, max= 8864, avg=7929.50, stdev=1647.23, samples=4 00:28:31.683 write: IOPS=7915, BW=30.9MiB/s (32.4MB/s)(62.0MiB/2006msec); 0 zone resets 00:28:31.683 slat (nsec): min=1624, max=88983, avg=1900.37, stdev=885.68 00:28:31.683 clat (usec): min=722, max=172584, avg=6830.04, stdev=9885.96 00:28:31.683 lat (usec): min=723, max=172597, avg=6831.94, stdev=9886.15 00:28:31.683 clat percentiles (msec): 00:28:31.683 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:28:31.683 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 7], 00:28:31.683 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 9], 00:28:31.683 | 99.00th=[ 10], 99.50th=[ 11], 99.90th=[ 171], 99.95th=[ 171], 00:28:31.684 | 99.99th=[ 174] 00:28:31.684 bw ( KiB/s): 
min=22920, max=35320, per=99.93%, avg=31642.00, stdev=5842.34, samples=4 00:28:31.684 iops : min= 5730, max= 8830, avg=7910.50, stdev=1460.58, samples=4 00:28:31.684 lat (usec) : 750=0.01% 00:28:31.684 lat (msec) : 2=0.02%, 4=0.63%, 10=90.91%, 20=8.04%, 250=0.40% 00:28:31.684 cpu : usr=65.89%, sys=27.58%, ctx=38, majf=0, minf=32 00:28:31.684 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:31.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:31.684 issued rwts: total=15928,15879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.684 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:31.684 00:28:31.684 Run status group 0 (all jobs): 00:28:31.684 READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=62.2MiB (65.2MB), run=2006-2006msec 00:28:31.684 WRITE: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=62.0MiB (65.0MB), run=2006-2006msec 00:28:31.684 22:27:26 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:31.684 22:27:26 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:28:32.620 22:27:27 -- host/fio.sh@64 -- # ls_nested_guid=2c70c968-4245-4e50-b350-64e82b8f75de 00:28:32.620 22:27:27 -- host/fio.sh@65 -- # get_lvs_free_mb 2c70c968-4245-4e50-b350-64e82b8f75de 00:28:32.620 22:27:27 -- common/autotest_common.sh@1343 -- # local lvs_uuid=2c70c968-4245-4e50-b350-64e82b8f75de 00:28:32.620 22:27:27 -- common/autotest_common.sh@1344 -- # local lvs_info 00:28:32.620 22:27:27 -- common/autotest_common.sh@1345 -- # local fc 00:28:32.620 22:27:27 -- common/autotest_common.sh@1346 -- # local cs 00:28:32.620 22:27:27 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:32.879 22:27:27 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:28:32.879 { 00:28:32.879 "uuid": "ce27f530-6289-4b17-b661-bffbf4bf97d3", 00:28:32.879 "name": "lvs_0", 00:28:32.879 "base_bdev": "Nvme0n1", 00:28:32.879 "total_data_clusters": 930, 00:28:32.879 "free_clusters": 0, 00:28:32.879 "block_size": 512, 00:28:32.879 "cluster_size": 1073741824 00:28:32.879 }, 00:28:32.879 { 00:28:32.879 "uuid": "2c70c968-4245-4e50-b350-64e82b8f75de", 00:28:32.879 "name": "lvs_n_0", 00:28:32.879 "base_bdev": "a7322215-c93c-4f7a-8b12-2649d10b99f7", 00:28:32.879 "total_data_clusters": 237847, 00:28:32.879 "free_clusters": 237847, 00:28:32.879 "block_size": 512, 00:28:32.879 "cluster_size": 4194304 00:28:32.879 } 00:28:32.879 ]' 00:28:32.879 22:27:27 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="2c70c968-4245-4e50-b350-64e82b8f75de") .free_clusters' 00:28:32.879 22:27:27 -- common/autotest_common.sh@1348 -- # fc=237847 00:28:32.879 22:27:27 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="2c70c968-4245-4e50-b350-64e82b8f75de") .cluster_size' 00:28:32.879 22:27:27 -- common/autotest_common.sh@1349 -- # cs=4194304 00:28:32.879 22:27:27 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:28:32.879 22:27:27 -- common/autotest_common.sh@1353 -- # echo 951388 00:28:32.879 951388 00:28:32.879 22:27:27 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:28:33.448 
db1856eb-489f-4c8c-b43c-6a4d587d1833 00:28:33.448 22:27:28 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:28:33.448 22:27:28 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:28:33.707 22:27:28 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:28:33.966 22:27:28 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:33.966 22:27:28 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:33.966 22:27:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:33.966 22:27:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:33.966 22:27:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:33.966 22:27:28 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:33.966 22:27:28 -- common/autotest_common.sh@1320 -- # shift 00:28:33.966 22:27:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:33.966 22:27:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:33.966 22:27:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:33.966 22:27:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:33.966 22:27:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:33.966 22:27:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:33.966 22:27:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:33.966 22:27:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:33.966 22:27:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:28:33.966 22:27:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:33.966 22:27:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:33.967 22:27:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:33.967 22:27:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:33.967 22:27:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:28:33.967 22:27:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:28:34.225 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:28:34.225 fio-3.35 00:28:34.225 Starting 1 thread 00:28:34.225 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.765 00:28:36.765 test: (groupid=0, jobs=1): err= 0: pid=3704765: Wed Jul 24 22:27:31 2024 00:28:36.765 read: IOPS=7712, BW=30.1MiB/s (31.6MB/s)(60.4MiB/2004msec) 00:28:36.765 slat (nsec): 
min=1590, max=106939, avg=1753.18, stdev=1126.17 00:28:36.765 clat (usec): min=4602, max=18543, avg=9457.32, stdev=1847.64 00:28:36.765 lat (usec): min=4604, max=18545, avg=9459.07, stdev=1847.67 00:28:36.765 clat percentiles (usec): 00:28:36.765 | 1.00th=[ 6456], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 8094], 00:28:36.765 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:28:36.765 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11731], 95.00th=[13304], 00:28:36.765 | 99.00th=[16319], 99.50th=[16909], 99.90th=[18220], 99.95th=[18482], 00:28:36.765 | 99.99th=[18482] 00:28:36.765 bw ( KiB/s): min=29144, max=31536, per=99.76%, avg=30776.00, stdev=1102.20, samples=4 00:28:36.765 iops : min= 7286, max= 7884, avg=7694.00, stdev=275.55, samples=4 00:28:36.765 write: IOPS=7705, BW=30.1MiB/s (31.6MB/s)(60.3MiB/2004msec); 0 zone resets 00:28:36.765 slat (nsec): min=1660, max=81756, avg=1818.66, stdev=766.89 00:28:36.765 clat (usec): min=2318, max=14591, avg=7008.59, stdev=1197.26 00:28:36.765 lat (usec): min=2322, max=14605, avg=7010.41, stdev=1197.32 00:28:36.765 clat percentiles (usec): 00:28:36.765 | 1.00th=[ 4228], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6063], 00:28:36.765 | 30.00th=[ 6456], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7242], 00:28:36.765 | 70.00th=[ 7504], 80.00th=[ 7832], 90.00th=[ 8356], 95.00th=[ 8979], 00:28:36.765 | 99.00th=[10421], 99.50th=[10945], 99.90th=[12125], 99.95th=[13304], 00:28:36.765 | 99.99th=[14484] 00:28:36.765 bw ( KiB/s): min=30240, max=31064, per=99.82%, avg=30764.00, stdev=374.35, samples=4 00:28:36.765 iops : min= 7560, max= 7766, avg=7691.00, stdev=93.59, samples=4 00:28:36.765 lat (msec) : 4=0.23%, 10=85.41%, 20=14.36% 00:28:36.765 cpu : usr=64.30%, sys=29.36%, ctx=53, majf=0, minf=32 00:28:36.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:28:36.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:28:36.765 issued rwts: total=15456,15441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:36.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:28:36.765 00:28:36.765 Run status group 0 (all jobs): 00:28:36.765 READ: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=60.4MiB (63.3MB), run=2004-2004msec 00:28:36.766 WRITE: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=60.3MiB (63.2MB), run=2004-2004msec 00:28:36.766 22:27:31 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:36.766 22:27:31 -- host/fio.sh@74 -- # sync 00:28:36.766 22:27:31 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:28:40.963 22:27:35 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:28:40.963 22:27:35 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:28:43.503 22:27:38 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:28:43.762 22:27:38 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:28:45.697 22:27:40 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:45.697 22:27:40 -- host/fio.sh@85 -- # rm -f 
./local-test-0-verify.state 00:28:45.697 22:27:40 -- host/fio.sh@86 -- # nvmftestfini 00:28:45.697 22:27:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:45.697 22:27:40 -- nvmf/common.sh@116 -- # sync 00:28:45.697 22:27:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:45.697 22:27:40 -- nvmf/common.sh@119 -- # set +e 00:28:45.697 22:27:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:45.697 22:27:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:45.697 rmmod nvme_tcp 00:28:45.697 rmmod nvme_fabrics 00:28:45.697 rmmod nvme_keyring 00:28:45.698 22:27:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:45.698 22:27:40 -- nvmf/common.sh@123 -- # set -e 00:28:45.698 22:27:40 -- nvmf/common.sh@124 -- # return 0 00:28:45.698 22:27:40 -- nvmf/common.sh@477 -- # '[' -n 3700748 ']' 00:28:45.698 22:27:40 -- nvmf/common.sh@478 -- # killprocess 3700748 00:28:45.698 22:27:40 -- common/autotest_common.sh@926 -- # '[' -z 3700748 ']' 00:28:45.698 22:27:40 -- common/autotest_common.sh@930 -- # kill -0 3700748 00:28:45.698 22:27:40 -- common/autotest_common.sh@931 -- # uname 00:28:45.698 22:27:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:45.698 22:27:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3700748 00:28:45.698 22:27:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:45.698 22:27:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:45.698 22:27:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3700748' 00:28:45.698 killing process with pid 3700748 00:28:45.698 22:27:40 -- common/autotest_common.sh@945 -- # kill 3700748 00:28:45.698 22:27:40 -- common/autotest_common.sh@950 -- # wait 3700748 00:28:45.698 22:27:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:45.698 22:27:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:45.698 22:27:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:45.698 22:27:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:45.698 22:27:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:45.698 22:27:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.698 22:27:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:45.698 22:27:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.239 22:27:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:48.239 00:28:48.239 real 0m39.472s 00:28:48.239 user 2m38.026s 00:28:48.239 sys 0m8.611s 00:28:48.239 22:27:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:48.239 22:27:42 -- common/autotest_common.sh@10 -- # set +x 00:28:48.239 ************************************ 00:28:48.239 END TEST nvmf_fio_host 00:28:48.239 ************************************ 00:28:48.239 22:27:42 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:48.239 22:27:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:48.239 22:27:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:48.239 22:27:42 -- common/autotest_common.sh@10 -- # set +x 00:28:48.239 ************************************ 00:28:48.239 START TEST nvmf_failover 00:28:48.239 ************************************ 00:28:48.239 22:27:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:28:48.239 * Looking for test storage... 
00:28:48.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.239 22:27:42 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.239 22:27:42 -- nvmf/common.sh@7 -- # uname -s 00:28:48.239 22:27:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.239 22:27:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.239 22:27:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.239 22:27:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.239 22:27:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.239 22:27:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.239 22:27:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.239 22:27:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.239 22:27:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.239 22:27:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.239 22:27:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:48.239 22:27:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:48.239 22:27:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.239 22:27:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.239 22:27:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.239 22:27:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.239 22:27:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.239 22:27:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.239 22:27:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.239 22:27:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.240 22:27:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.240 22:27:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.240 22:27:42 -- paths/export.sh@5 -- # export PATH 00:28:48.240 22:27:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.240 22:27:42 -- nvmf/common.sh@46 -- # : 0 00:28:48.240 22:27:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:48.240 22:27:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:48.240 22:27:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:48.240 22:27:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.240 22:27:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.240 22:27:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:48.240 22:27:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:48.240 22:27:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:48.240 22:27:42 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:48.240 22:27:42 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:48.240 22:27:42 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:48.240 22:27:42 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:48.240 22:27:42 -- host/failover.sh@18 -- # nvmftestinit 00:28:48.240 22:27:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:48.240 22:27:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.240 22:27:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:48.240 22:27:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:48.240 22:27:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:48.240 22:27:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.240 22:27:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.240 22:27:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.240 22:27:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:48.240 22:27:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:48.240 22:27:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:48.240 22:27:42 -- common/autotest_common.sh@10 -- # set +x 00:28:53.518 22:27:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:53.518 22:27:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:53.518 22:27:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:53.518 22:27:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:53.518 22:27:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:53.518 22:27:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:53.518 22:27:47 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:28:53.518 22:27:47 -- nvmf/common.sh@294 -- # net_devs=() 00:28:53.518 22:27:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:53.518 22:27:47 -- nvmf/common.sh@295 -- # e810=() 00:28:53.518 22:27:47 -- nvmf/common.sh@295 -- # local -ga e810 00:28:53.518 22:27:47 -- nvmf/common.sh@296 -- # x722=() 00:28:53.518 22:27:47 -- nvmf/common.sh@296 -- # local -ga x722 00:28:53.518 22:27:47 -- nvmf/common.sh@297 -- # mlx=() 00:28:53.518 22:27:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:53.518 22:27:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:53.518 22:27:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:53.518 22:27:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:53.518 22:27:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:53.518 22:27:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:53.518 22:27:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:53.518 22:27:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:53.518 22:27:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:53.518 22:27:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:53.518 22:27:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:53.518 22:27:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:53.518 22:27:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:53.518 22:27:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:53.518 22:27:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:53.518 22:27:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:53.518 22:27:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:53.518 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:53.518 22:27:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:53.518 22:27:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:53.518 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:53.518 22:27:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:53.518 22:27:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:53.518 22:27:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.518 22:27:47 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:28:53.518 22:27:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.518 22:27:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:53.518 Found net devices under 0000:86:00.0: cvl_0_0 00:28:53.518 22:27:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.518 22:27:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:53.518 22:27:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:53.518 22:27:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:53.518 22:27:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:53.518 22:27:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:53.518 Found net devices under 0000:86:00.1: cvl_0_1 00:28:53.518 22:27:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:53.518 22:27:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:53.518 22:27:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:53.518 22:27:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:53.518 22:27:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:53.518 22:27:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:53.518 22:27:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:53.518 22:27:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:53.518 22:27:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:53.518 22:27:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:53.518 22:27:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:53.518 22:27:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:53.518 22:27:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:53.518 22:27:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:53.518 22:27:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:53.518 22:27:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:53.518 22:27:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:53.518 22:27:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:53.518 22:27:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:53.518 22:27:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:53.518 22:27:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:53.518 22:27:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:53.518 22:27:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:53.518 22:27:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:53.518 22:27:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:53.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:53.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:28:53.518 00:28:53.518 --- 10.0.0.2 ping statistics --- 00:28:53.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.518 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:28:53.518 22:27:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:53.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:53.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:28:53.518 00:28:53.518 --- 10.0.0.1 ping statistics --- 00:28:53.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:53.518 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:28:53.518 22:27:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:53.518 22:27:48 -- nvmf/common.sh@410 -- # return 0 00:28:53.518 22:27:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:53.518 22:27:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:53.518 22:27:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:53.518 22:27:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:53.518 22:27:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:53.518 22:27:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:53.518 22:27:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:53.518 22:27:48 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:53.518 22:27:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:53.518 22:27:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:53.519 22:27:48 -- common/autotest_common.sh@10 -- # set +x 00:28:53.519 22:27:48 -- nvmf/common.sh@469 -- # nvmfpid=3709939 00:28:53.519 22:27:48 -- nvmf/common.sh@470 -- # waitforlisten 3709939 00:28:53.519 22:27:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:53.519 22:27:48 -- common/autotest_common.sh@819 -- # '[' -z 3709939 ']' 00:28:53.519 22:27:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.519 22:27:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:53.519 22:27:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.519 22:27:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:53.519 22:27:48 -- common/autotest_common.sh@10 -- # set +x 00:28:53.519 [2024-07-24 22:27:48.207408] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:28:53.519 [2024-07-24 22:27:48.207452] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.519 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.519 [2024-07-24 22:27:48.263791] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:53.519 [2024-07-24 22:27:48.302727] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:53.519 [2024-07-24 22:27:48.302841] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.519 [2024-07-24 22:27:48.302849] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.519 [2024-07-24 22:27:48.302856] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
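For the failover run the same namespace setup is reused: nvmf/common.sh composes NVMF_APP with the netns wrapper during nvmf_tcp_init, and nvmfappstart then launches the target with the 0xE core mask and records its pid, as traced above. A condensed sketch of that composition, with a relative build path assumed instead of the workspace path:

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(./build/bin/nvmf_tgt -i 0 -e 0xFFFF)
# nvmf/common.sh@269 above: prefix the app with the netns wrapper so every
# target invocation runs inside the isolated namespace.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
"${NVMF_APP[@]}" -m 0xE &
nvmfpid=$!   # the harness then waits on /var/tmp/spdk.sock via waitforlisten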
00:28:53.519 [2024-07-24 22:27:48.302958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:53.519 [2024-07-24 22:27:48.303050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.519 [2024-07-24 22:27:48.303052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:54.086 22:27:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:54.086 22:27:49 -- common/autotest_common.sh@852 -- # return 0 00:28:54.086 22:27:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:54.086 22:27:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:54.086 22:27:49 -- common/autotest_common.sh@10 -- # set +x 00:28:54.086 22:27:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.086 22:27:49 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:54.086 [2024-07-24 22:27:49.207472] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.346 22:27:49 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:54.346 Malloc0 00:28:54.346 22:27:49 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:54.605 22:27:49 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:54.863 22:27:49 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.863 [2024-07-24 22:27:49.920107] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.863 22:27:49 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:55.122 [2024-07-24 22:27:50.096718] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:55.122 22:27:50 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:55.382 [2024-07-24 22:27:50.265267] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:55.382 22:27:50 -- host/failover.sh@31 -- # bdevperf_pid=3710210 00:28:55.382 22:27:50 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:55.382 22:27:50 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:55.382 22:27:50 -- host/failover.sh@34 -- # waitforlisten 3710210 /var/tmp/bdevperf.sock 00:28:55.382 22:27:50 -- common/autotest_common.sh@819 -- # '[' -z 3710210 ']' 00:28:55.382 22:27:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:55.382 22:27:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:55.382 22:27:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:55.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:55.382 22:27:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:55.382 22:27:50 -- common/autotest_common.sh@10 -- # set +x 00:28:56.319 22:27:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:56.319 22:27:51 -- common/autotest_common.sh@852 -- # return 0 00:28:56.319 22:27:51 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:56.578 NVMe0n1 00:28:56.578 22:27:51 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:56.837 00:28:56.837 22:27:51 -- host/failover.sh@39 -- # run_test_pid=3710544 00:28:56.837 22:27:51 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:56.837 22:27:51 -- host/failover.sh@41 -- # sleep 1 00:28:58.217 22:27:52 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:58.217 [2024-07-24 22:27:53.108890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.217 [2024-07-24 22:27:53.108945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.217 [2024-07-24 22:27:53.108952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.217 [2024-07-24 22:27:53.108959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.217 [2024-07-24 22:27:53.108966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.217 [2024-07-24 22:27:53.108972] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.217 [2024-07-24 22:27:53.108978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.217 [2024-07-24 22:27:53.108984] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.217 [2024-07-24 22:27:53.108991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.217 [2024-07-24 22:27:53.108997] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.217 [2024-07-24 22:27:53.109003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.217 [2024-07-24 22:27:53.109009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.217 [2024-07-24 22:27:53.109015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the 
state(5) to be set 00:28:58.217 [2024-07-24
22:27:53.109297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.218 [2024-07-24 22:27:53.109304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.218 [2024-07-24 22:27:53.109310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.218 [2024-07-24 22:27:53.109316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.218 [2024-07-24 22:27:53.109322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.218 [2024-07-24 22:27:53.109329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.218 [2024-07-24 22:27:53.109336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.218 [2024-07-24 22:27:53.109342] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282260 is same with the state(5) to be set 00:28:58.218 22:27:53 -- host/failover.sh@45 -- # sleep 3 00:29:01.511 22:27:56 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:01.511 00:29:01.511 22:27:56 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:01.511 [2024-07-24 22:27:56.611860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.611903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.611911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.611917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.611924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.611930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.611936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.611948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.611954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.611961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.611967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.612113]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.612119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.612128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.612134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.612140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.612146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.612152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.612158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 [2024-07-24 22:27:56.612164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12837b0 is same with the state(5) to be set 00:29:01.511 22:27:56 -- host/failover.sh@50 -- # sleep 3 00:29:04.806 22:27:59 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.806 [2024-07-24 22:27:59.803410] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.806 22:27:59 -- host/failover.sh@55 -- # sleep 1 00:29:05.744 22:28:00 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:06.003 [2024-07-24 22:28:00.998096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.003 [2024-07-24 22:28:00.998137] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.003 [2024-07-24 22:28:00.998145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.003 [2024-07-24 22:28:00.998152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.003 [2024-07-24 22:28:00.998158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.003 [2024-07-24 22:28:00.998164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.003 [2024-07-24 22:28:00.998170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.003 [2024-07-24 22:28:00.998176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.003 [2024-07-24 22:28:00.998183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be 
set 00:29:06.004 [2024-07-24 22:28:00.998457]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998513] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998571] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 [2024-07-24 22:28:00.998576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1284580 is same with the state(5) to be set 00:29:06.004 22:28:01 -- host/failover.sh@59 -- # wait 3710544 00:29:12.650 0 00:29:12.650 22:28:07 -- host/failover.sh@61 -- # killprocess 3710210 00:29:12.650 22:28:07 -- common/autotest_common.sh@926 -- # '[' -z 3710210 ']' 00:29:12.650 22:28:07 -- common/autotest_common.sh@930 -- # kill -0 3710210 00:29:12.650 
22:28:07 -- common/autotest_common.sh@931 -- # uname 00:29:12.650 22:28:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:12.650 22:28:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3710210 00:29:12.650 22:28:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:12.650 22:28:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:12.650 22:28:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3710210' 00:29:12.650 killing process with pid 3710210 00:29:12.650 22:28:07 -- common/autotest_common.sh@945 -- # kill 3710210 00:29:12.650 22:28:07 -- common/autotest_common.sh@950 -- # wait 3710210 00:29:12.650 22:28:07 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:12.650 [2024-07-24 22:27:50.334507] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:12.651 [2024-07-24 22:27:50.334558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710210 ] 00:29:12.651 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.651 [2024-07-24 22:27:50.390724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.651 [2024-07-24 22:27:50.429281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.651 Running I/O for 15 seconds... 00:29:12.651 [2024-07-24 22:27:53.109636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:8608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.109993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.109999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 
22:27:53.110073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.651 [2024-07-24 22:27:53.110222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.651 [2024-07-24 22:27:53.110229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.652 [2024-07-24 22:27:53.110437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.652 [2024-07-24 22:27:53.110466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.652 [2024-07-24 22:27:53.110496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.652 [2024-07-24 22:27:53.110511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 
22:27:53.110674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.652 [2024-07-24 22:27:53.110689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.652 [2024-07-24 22:27:53.110703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.652 [2024-07-24 22:27:53.110732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.652 [2024-07-24 22:27:53.110746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.652 [2024-07-24 22:27:53.110754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.110761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.110775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.110791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.110806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.110820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.110834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.110848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.110863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.110878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.110892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.110907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.110921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.110936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.110950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.110965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.110985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.110993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.111000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.111014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.111030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.111048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.111063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.111078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.111092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.111107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.111121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:12.653 [2024-07-24 22:27:53.111129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.111136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:9104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.111150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.111166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.111181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.111196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.111211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.111225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.653 [2024-07-24 22:27:53.111240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.111255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.111269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111277] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.111285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.111299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.653 [2024-07-24 22:27:53.111314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.653 [2024-07-24 22:27:53.111322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:53.111328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:53.111343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:53.111360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:53.111375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.654 [2024-07-24 22:27:53.111390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.654 [2024-07-24 22:27:53.111405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.654 [2024-07-24 22:27:53.111425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:42 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:53.111439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.654 [2024-07-24 22:27:53.111454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.654 [2024-07-24 22:27:53.111468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:53.111482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:53.111497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:53.111512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:53.111526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:53.111541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:53.111556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:53.111571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89b020 is same with the 
state(5) to be set 00:29:12.654 [2024-07-24 22:27:53.111587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:12.654 [2024-07-24 22:27:53.111592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:12.654 [2024-07-24 22:27:53.111602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8784 len:8 PRP1 0x0 PRP2 0x0 00:29:12.654 [2024-07-24 22:27:53.111608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111651] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x89b020 was disconnected and freed. reset controller. 00:29:12.654 [2024-07-24 22:27:53.111664] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:12.654 [2024-07-24 22:27:53.111686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.654 [2024-07-24 22:27:53.111693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.654 [2024-07-24 22:27:53.111708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.654 [2024-07-24 22:27:53.111722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.654 [2024-07-24 22:27:53.111735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:53.111742] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.654 [2024-07-24 22:27:53.113719] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.654 [2024-07-24 22:27:53.113743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b940 (9): Bad file descriptor 00:29:12.654 [2024-07-24 22:27:53.146621] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
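The long run of ABORTED - SQ DELETION completions above, followed by "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" and "Resetting controller successful", is the host side reacting to the active path being torn down: queued I/O on the old qpair is aborted, the qpair is disconnected and freed, and the controller is reconnected on the alternate trid. As a hedged illustration only (this transcript does not show the setup commands that produced it), a failover of this shape is commonly driven with SPDK's rpc.py by registering two listeners/paths and then dropping the active one; the subsystem NQN and the 10.0.0.2:4420/4421 addresses below come from the log, while everything else (backing bdev, serial number, controller name, which RPC socket each call targets) is assumed:

  # Target side (assumed nvmf target app RPC socket): subsystem with two TCP listeners.
  rpc.py nvmf_create_transport -t tcp
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py bdev_malloc_create 64 512 -b Malloc0                # assumed backing bdev
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

  # Host side (assumed initiator app RPC socket): attach both paths under one
  # controller name so the second trid is available as a failover target
  # (depending on SPDK version this may also take -x failover).
  rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Removing the active listener while I/O is running forces the host to abort
  # its queued requests (the SQ DELETION completions seen above) and fail over
  # to 10.0.0.2:4421, matching the reset-controller sequence in this log.
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420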
00:29:12.654 [2024-07-24 22:27:56.612338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:56.612371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:56.612386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:56.612394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:56.612407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:56.612414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:56.612422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:56.612429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:56.612437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:56.612443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:56.612451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:56.612458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:56.612466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:56.612472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:56.612481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:56.612487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:56.612495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:56.612502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:56.612509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:56.612516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:56.612524] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:56.612531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:56.612539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.654 [2024-07-24 22:27:56.612546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.654 [2024-07-24 22:27:56.612554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:90280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612698] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:90880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.655 [2024-07-24 22:27:56.612724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:90888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.655 [2024-07-24 22:27:56.612742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:90896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.655 [2024-07-24 22:27:56.612758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.655 [2024-07-24 22:27:56.612773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.655 [2024-07-24 22:27:56.612789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:90928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.655 [2024-07-24 22:27:56.612820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.655 [2024-07-24 22:27:56.612848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.655 [2024-07-24 22:27:56.612908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.655 [2024-07-24 22:27:56.612922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.655 [2024-07-24 22:27:56.612938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.612989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.612997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.613004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90368 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.613011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.613019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.613025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.655 [2024-07-24 22:27:56.613033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.655 [2024-07-24 22:27:56.613039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:12.656 [2024-07-24 22:27:56.613160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:91120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613310] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613465] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.656 [2024-07-24 22:27:56.613605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.656 [2024-07-24 22:27:56.613615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.656 [2024-07-24 22:27:56.613623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:91232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.657 [2024-07-24 22:27:56.613653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.657 [2024-07-24 22:27:56.613668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.657 [2024-07-24 22:27:56.613684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.657 [2024-07-24 22:27:56.613700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.657 [2024-07-24 22:27:56.613714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 
[2024-07-24 22:27:56.613780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.657 [2024-07-24 22:27:56.613888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.613991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.613999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.614006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.614014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.614020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.614028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.657 [2024-07-24 22:27:56.614034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.614046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.614052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.614060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.614067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.614075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:59 nsid:1 lba:91344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.614082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.614090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.614097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.614105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:91360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.614112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.614120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.614126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.614135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:91376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.614143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.614151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.657 [2024-07-24 22:27:56.614157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.614165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:91392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.614171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.614179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.657 [2024-07-24 22:27:56.614186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.657 [2024-07-24 22:27:56.614194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.657 [2024-07-24 22:27:56.614200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:27:56.614208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:27:56.614214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:27:56.614222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90736 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:27:56.614229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:27:56.614237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:27:56.614243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:27:56.614251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:27:56.614257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:27:56.614266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:27:56.614273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:27:56.614281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:27:56.614287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:27:56.614295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:27:56.614302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:27:56.614309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x887e40 is same with the state(5) to be set 00:29:12.658 [2024-07-24 22:27:56.614317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:12.658 [2024-07-24 22:27:56.614323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:12.658 [2024-07-24 22:27:56.614330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90872 len:8 PRP1 0x0 PRP2 0x0 00:29:12.658 [2024-07-24 22:27:56.614337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:27:56.614379] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x887e40 was disconnected and freed. reset controller. 
00:29:12.658 [2024-07-24 22:27:56.614387] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:29:12.658 [2024-07-24 22:27:56.614408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.658 [2024-07-24 22:27:56.614416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:27:56.614423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.658 [2024-07-24 22:27:56.614429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:27:56.614436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.658 [2024-07-24 22:27:56.614443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:27:56.614450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.658 [2024-07-24 22:27:56.614456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:27:56.614463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.658 [2024-07-24 22:27:56.616347] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.658 [2024-07-24 22:27:56.616373] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b940 (9): Bad file descriptor 00:29:12.658 [2024-07-24 22:27:56.646058] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
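The records above show one complete failover cycle: queued I/O on the old queue pair is failed with ABORTED - SQ DELETION, the qpair (0x887e40) is disconnected and freed, bdev_nvme fails over from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset completes. A cycle like this is only possible because the target exposes the same subsystem on several TCP listeners. A minimal sketch of that setup is below, assuming a running SPDK target with nqn.2016-06.io.spdk:cnode1 already configured; the NQN, address and ports are taken from this trace, while the RPC variable and the loop are illustrative assumptions, not part of the original script.

# Sketch only: publish the same subsystem on several TCP ports so the host
# has alternate paths (trids) to fail over to. The rpc.py subcommand and the
# NQN/address/ports appear elsewhere in this trace; RPC and the loop are
# assumptions made for this example.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
for port in 4420 4421 4422; do
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s "$port"
done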
00:29:12.658 [2024-07-24 22:28:00.998736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998925] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:26272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.998987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.998995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.999001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.999009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.999017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.999025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.999033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.999047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:26368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.999054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.999062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:26376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.999068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.999076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.999082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.658 [2024-07-24 22:28:00.999090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.658 [2024-07-24 22:28:00.999097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.659 [2024-07-24 22:28:00.999211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:93 nsid:1 lba:26408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.659 [2024-07-24 22:28:00.999241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.659 [2024-07-24 22:28:00.999256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.659 [2024-07-24 22:28:00.999285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.659 [2024-07-24 22:28:00.999316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.659 [2024-07-24 22:28:00.999330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.659 [2024-07-24 22:28:00.999358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26488 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.659 [2024-07-24 22:28:00.999372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.659 [2024-07-24 22:28:00.999402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 
[2024-07-24 22:28:00.999516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.659 [2024-07-24 22:28:00.999544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.659 [2024-07-24 22:28:00.999558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.659 [2024-07-24 22:28:00.999588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.659 [2024-07-24 22:28:00.999632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.659 [2024-07-24 22:28:00.999640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.660 [2024-07-24 22:28:00.999647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.660 [2024-07-24 22:28:00.999781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.660 [2024-07-24 22:28:00.999796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.660 [2024-07-24 22:28:00.999811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.660 [2024-07-24 22:28:00.999826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:26616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:26640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.660 [2024-07-24 22:28:00.999882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:00.999989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:00.999996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:01.000005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.660 [2024-07-24 22:28:01.000011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:01.000019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.660 [2024-07-24 22:28:01.000026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:01.000034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.660 [2024-07-24 22:28:01.000040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:01.000052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.660 [2024-07-24 22:28:01.000058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:01.000066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.660 [2024-07-24 22:28:01.000073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.660 [2024-07-24 22:28:01.000081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.660 [2024-07-24 22:28:01.000087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 
[2024-07-24 22:28:01.000110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:26728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:26744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:26824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:26848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:36 nsid:1 lba:26864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:26904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:26920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:12.661 [2024-07-24 22:28:01.000513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26296 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:26304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:26352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.661 [2024-07-24 22:28:01.000626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.661 [2024-07-24 22:28:01.000648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:12.661 [2024-07-24 22:28:01.000658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:12.662 [2024-07-24 22:28:01.000664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26392 len:8 PRP1 0x0 PRP2 0x0 00:29:12.662 [2024-07-24 22:28:01.000676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.662 [2024-07-24 22:28:01.000717] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8aa2e0 was disconnected and freed. reset controller. 
00:29:12.662 [2024-07-24 22:28:01.000726] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:29:12.662 [2024-07-24 22:28:01.000745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.662 [2024-07-24 22:28:01.000753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.662 [2024-07-24 22:28:01.000760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.662 [2024-07-24 22:28:01.000768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.662 [2024-07-24 22:28:01.000775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.662 [2024-07-24 22:28:01.000782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.662 [2024-07-24 22:28:01.000789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.662 [2024-07-24 22:28:01.000796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.662 [2024-07-24 22:28:01.000803] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:12.662 [2024-07-24 22:28:01.002707] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:12.662 [2024-07-24 22:28:01.002733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87b940 (9): Bad file descriptor 00:29:12.662 [2024-07-24 22:28:01.073127] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
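This second cycle (10.0.0.2:4422 back to 10.0.0.2:4420) again ends in a successful controller reset. The trace that follows shows host/failover.sh verifying the run by counting the 'Resetting controller successful' notices and requiring exactly three (host/failover.sh@65-67). A hedged sketch of that verification step is below; the grep pattern and the expected count of 3 come from the trace, while the LOG path is an assumption based on the try.txt file that appears later in the trace.

# Sketch of the pass/fail check shown below: three failovers are expected,
# so the captured bdevperf output must contain exactly three successful
# resets. LOG is an assumed path (try.txt appears later in the trace); the
# grep pattern and expected count come from host/failover.sh@65-67.
LOG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
count=$(grep -c 'Resetting controller successful' "$LOG")
(( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }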
00:29:12.662
00:29:12.662 Latency(us)
00:29:12.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:12.662 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:12.662 Verification LBA range: start 0x0 length 0x4000
00:29:12.662 NVMe0n1 : 15.00 16309.55 63.71 630.54 0.00 7542.75 1068.52 21655.37
00:29:12.662 ===================================================================================================================
00:29:12.662 Total : 16309.55 63.71 630.54 0.00 7542.75 1068.52 21655.37
00:29:12.662 Received shutdown signal, test time was about 15.000000 seconds
00:29:12.662
00:29:12.662 Latency(us)
00:29:12.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:12.662 ===================================================================================================================
00:29:12.662 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:12.662 22:28:07 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:29:12.662 22:28:07 -- host/failover.sh@65 -- # count=3
00:29:12.662 22:28:07 -- host/failover.sh@67 -- # (( count != 3 ))
00:29:12.662 22:28:07 -- host/failover.sh@73 -- # bdevperf_pid=3713009
00:29:12.662 22:28:07 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:29:12.662 22:28:07 -- host/failover.sh@75 -- # waitforlisten 3713009 /var/tmp/bdevperf.sock
00:29:12.662 22:28:07 -- common/autotest_common.sh@819 -- # '[' -z 3713009 ']'
00:29:12.662 22:28:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:12.662 22:28:07 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:12.662 22:28:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
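The trace above starts a second bdevperf in idle mode (-z) on its own UNIX-domain RPC socket and waits for that socket to appear; the trace below then drives the failover scenario over the same socket. A condensed, self-contained sketch of that sequence follows. Every subcommand and flag is taken from this trace; SPDK_DIR, SOCK, the RPC shorthand and packaging the steps into one script are assumptions made for readability.

# Condensed sketch of the bdevperf-driven failover exercise in this trace.
# All subcommands and flags appear in the log; SPDK_DIR, SOCK, RPC and the
# single-script layout are illustrative assumptions.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1
RPC="$SPDK_DIR/scripts/rpc.py -s $SOCK"

# Start bdevperf idle (-z) so paths can be attached before I/O begins.
"$SPDK_DIR/build/examples/bdevperf" -z -r "$SOCK" -q 128 -o 4096 -w verify -t 1 -f &

# Attach the subsystem over three paths, confirm the controller exists,
# then drop the first path so the verify workload must ride a failover.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"
$RPC bdev_nvme_get_controllers | grep -q NVMe0
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
sleep 3

# Kick off the actual I/O run inside the already-running bdevperf process.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests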
00:29:12.662 22:28:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:12.662 22:28:07 -- common/autotest_common.sh@10 -- # set +x 00:29:13.232 22:28:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:13.232 22:28:08 -- common/autotest_common.sh@852 -- # return 0 00:29:13.232 22:28:08 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:13.232 [2024-07-24 22:28:08.315149] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:13.232 22:28:08 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:13.491 [2024-07-24 22:28:08.479587] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:13.491 22:28:08 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:13.751 NVMe0n1 00:29:13.751 22:28:08 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:14.010 00:29:14.010 22:28:09 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:14.270 00:29:14.270 22:28:09 -- host/failover.sh@82 -- # grep -q NVMe0 00:29:14.270 22:28:09 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:14.529 22:28:09 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:14.789 22:28:09 -- host/failover.sh@87 -- # sleep 3 00:29:18.082 22:28:12 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:18.082 22:28:12 -- host/failover.sh@88 -- # grep -q NVMe0 00:29:18.082 22:28:12 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:18.082 22:28:12 -- host/failover.sh@90 -- # run_test_pid=3713951 00:29:18.082 22:28:12 -- host/failover.sh@92 -- # wait 3713951 00:29:19.021 0 00:29:19.021 22:28:13 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:19.021 [2024-07-24 22:28:07.362384] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
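From here the test is driven entirely over RPC: two more listeners are added to the subsystem, the same controller is attached through each target port so the NVMe0n1 bdev has alternate paths, the active path is detached, and the workload is started through bdevperf's helper script. A condensed sketch of that sequence, assuming the idle bdevperf started above is waiting on /var/tmp/bdevperf.sock (paths abbreviated relative to the SPDK checkout):

    # Expose nqn.2016-06.io.spdk:cnode1 on two extra ports next to the existing 4420 listener.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Attach the same controller name through each port so the bdev has failover paths.
    for port in 4420 4421 4422; do
        scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done

    # Drop the active 4420 path, give the reconnect logic a moment, then run the workload.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The bdev_nvme_get_controllers calls interleaved in the trace are sanity checks that NVMe0 is still present before and after each step.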
00:29:19.021 [2024-07-24 22:28:07.362437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3713009 ] 00:29:19.021 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.021 [2024-07-24 22:28:07.418155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.021 [2024-07-24 22:28:07.453140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.021 [2024-07-24 22:28:09.639754] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:29:19.021 [2024-07-24 22:28:09.639800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:19.021 [2024-07-24 22:28:09.639811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.021 [2024-07-24 22:28:09.639820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:19.021 [2024-07-24 22:28:09.639826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.021 [2024-07-24 22:28:09.639834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:19.021 [2024-07-24 22:28:09.639840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.021 [2024-07-24 22:28:09.639847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:19.021 [2024-07-24 22:28:09.639854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:19.021 [2024-07-24 22:28:09.639861] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.021 [2024-07-24 22:28:09.639881] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.021 [2024-07-24 22:28:09.639894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa61940 (9): Bad file descriptor 00:29:19.022 [2024-07-24 22:28:09.651583] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:19.022 Running I/O for 1 seconds... 
00:29:19.022 00:29:19.022 Latency(us) 00:29:19.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.022 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:19.022 Verification LBA range: start 0x0 length 0x4000 00:29:19.022 NVMe0n1 : 1.01 16376.13 63.97 0.00 0.00 7786.23 983.04 23251.03 00:29:19.022 =================================================================================================================== 00:29:19.022 Total : 16376.13 63.97 0.00 0.00 7786.23 983.04 23251.03 00:29:19.022 22:28:13 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:19.022 22:28:13 -- host/failover.sh@95 -- # grep -q NVMe0 00:29:19.022 22:28:14 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:19.281 22:28:14 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:19.281 22:28:14 -- host/failover.sh@99 -- # grep -q NVMe0 00:29:19.541 22:28:14 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:19.541 22:28:14 -- host/failover.sh@101 -- # sleep 3 00:29:22.835 22:28:17 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:22.835 22:28:17 -- host/failover.sh@103 -- # grep -q NVMe0 00:29:22.835 22:28:17 -- host/failover.sh@108 -- # killprocess 3713009 00:29:22.835 22:28:17 -- common/autotest_common.sh@926 -- # '[' -z 3713009 ']' 00:29:22.835 22:28:17 -- common/autotest_common.sh@930 -- # kill -0 3713009 00:29:22.835 22:28:17 -- common/autotest_common.sh@931 -- # uname 00:29:22.835 22:28:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:22.835 22:28:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3713009 00:29:22.835 22:28:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:22.835 22:28:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:22.835 22:28:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3713009' 00:29:22.835 killing process with pid 3713009 00:29:22.835 22:28:17 -- common/autotest_common.sh@945 -- # kill 3713009 00:29:22.835 22:28:17 -- common/autotest_common.sh@950 -- # wait 3713009 00:29:23.095 22:28:18 -- host/failover.sh@110 -- # sync 00:29:23.095 22:28:18 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:23.355 22:28:18 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:29:23.356 22:28:18 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:23.356 22:28:18 -- host/failover.sh@116 -- # nvmftestfini 00:29:23.356 22:28:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:23.356 22:28:18 -- nvmf/common.sh@116 -- # sync 00:29:23.356 22:28:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:23.356 22:28:18 -- nvmf/common.sh@119 -- # set +e 00:29:23.356 22:28:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:23.356 22:28:18 -- nvmf/common.sh@121 -- # 
modprobe -v -r nvme-tcp 00:29:23.356 rmmod nvme_tcp 00:29:23.356 rmmod nvme_fabrics 00:29:23.356 rmmod nvme_keyring 00:29:23.356 22:28:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:23.356 22:28:18 -- nvmf/common.sh@123 -- # set -e 00:29:23.356 22:28:18 -- nvmf/common.sh@124 -- # return 0 00:29:23.356 22:28:18 -- nvmf/common.sh@477 -- # '[' -n 3709939 ']' 00:29:23.356 22:28:18 -- nvmf/common.sh@478 -- # killprocess 3709939 00:29:23.356 22:28:18 -- common/autotest_common.sh@926 -- # '[' -z 3709939 ']' 00:29:23.356 22:28:18 -- common/autotest_common.sh@930 -- # kill -0 3709939 00:29:23.356 22:28:18 -- common/autotest_common.sh@931 -- # uname 00:29:23.356 22:28:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:23.356 22:28:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3709939 00:29:23.356 22:28:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:23.356 22:28:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:23.356 22:28:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3709939' 00:29:23.356 killing process with pid 3709939 00:29:23.356 22:28:18 -- common/autotest_common.sh@945 -- # kill 3709939 00:29:23.356 22:28:18 -- common/autotest_common.sh@950 -- # wait 3709939 00:29:23.616 22:28:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:23.616 22:28:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:23.616 22:28:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:23.616 22:28:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:23.616 22:28:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:23.616 22:28:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.616 22:28:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:23.616 22:28:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.526 22:28:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:25.527 00:29:25.527 real 0m37.767s 00:29:25.527 user 2m2.329s 00:29:25.527 sys 0m7.272s 00:29:25.527 22:28:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.527 22:28:20 -- common/autotest_common.sh@10 -- # set +x 00:29:25.527 ************************************ 00:29:25.527 END TEST nvmf_failover 00:29:25.527 ************************************ 00:29:25.527 22:28:20 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:25.527 22:28:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:25.527 22:28:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:25.527 22:28:20 -- common/autotest_common.sh@10 -- # set +x 00:29:25.527 ************************************ 00:29:25.527 START TEST nvmf_discovery 00:29:25.527 ************************************ 00:29:25.527 22:28:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:29:25.787 * Looking for test storage... 
00:29:25.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:25.787 22:28:20 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.787 22:28:20 -- nvmf/common.sh@7 -- # uname -s 00:29:25.787 22:28:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.787 22:28:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.787 22:28:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.787 22:28:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.787 22:28:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.787 22:28:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.787 22:28:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.787 22:28:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.787 22:28:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.787 22:28:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.787 22:28:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:25.787 22:28:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:25.787 22:28:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.787 22:28:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.787 22:28:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.787 22:28:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.787 22:28:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.787 22:28:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.787 22:28:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.787 22:28:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.787 22:28:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.787 22:28:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.787 22:28:20 -- paths/export.sh@5 -- # export PATH 00:29:25.787 22:28:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.787 22:28:20 -- nvmf/common.sh@46 -- # : 0 00:29:25.787 22:28:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:25.787 22:28:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:25.787 22:28:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:25.787 22:28:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.787 22:28:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.787 22:28:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:25.787 22:28:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:25.788 22:28:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:25.788 22:28:20 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:29:25.788 22:28:20 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:29:25.788 22:28:20 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:25.788 22:28:20 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:25.788 22:28:20 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:25.788 22:28:20 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:29:25.788 22:28:20 -- host/discovery.sh@25 -- # nvmftestinit 00:29:25.788 22:28:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:25.788 22:28:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.788 22:28:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:25.788 22:28:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:25.788 22:28:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:25.788 22:28:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.788 22:28:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.788 22:28:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.788 22:28:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:25.788 22:28:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:25.788 22:28:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:25.788 22:28:20 -- common/autotest_common.sh@10 -- # set +x 00:29:31.085 22:28:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:31.085 22:28:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:31.085 22:28:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:31.085 22:28:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:31.085 22:28:25 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:31.085 22:28:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:31.085 22:28:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:31.085 22:28:25 -- nvmf/common.sh@294 -- # net_devs=() 00:29:31.085 22:28:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:31.085 22:28:25 -- nvmf/common.sh@295 -- # e810=() 00:29:31.085 22:28:25 -- nvmf/common.sh@295 -- # local -ga e810 00:29:31.085 22:28:25 -- nvmf/common.sh@296 -- # x722=() 00:29:31.085 22:28:25 -- nvmf/common.sh@296 -- # local -ga x722 00:29:31.085 22:28:25 -- nvmf/common.sh@297 -- # mlx=() 00:29:31.085 22:28:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:31.085 22:28:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.085 22:28:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.085 22:28:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.085 22:28:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.085 22:28:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.085 22:28:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.085 22:28:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.085 22:28:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.086 22:28:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.086 22:28:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.086 22:28:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.086 22:28:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:31.086 22:28:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:31.086 22:28:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:31.086 22:28:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:31.086 22:28:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:31.086 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:31.086 22:28:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:31.086 22:28:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:31.086 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:31.086 22:28:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:31.086 22:28:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:31.086 
22:28:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.086 22:28:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:31.086 22:28:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.086 22:28:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:31.086 Found net devices under 0000:86:00.0: cvl_0_0 00:29:31.086 22:28:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.086 22:28:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:31.086 22:28:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.086 22:28:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:31.086 22:28:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.086 22:28:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:31.086 Found net devices under 0000:86:00.1: cvl_0_1 00:29:31.086 22:28:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.086 22:28:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:31.086 22:28:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:31.086 22:28:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:31.086 22:28:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:31.086 22:28:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.086 22:28:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.086 22:28:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.086 22:28:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:31.086 22:28:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.086 22:28:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.086 22:28:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:31.086 22:28:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.086 22:28:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.086 22:28:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:31.086 22:28:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:31.086 22:28:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.086 22:28:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.086 22:28:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.086 22:28:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.086 22:28:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:31.086 22:28:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.086 22:28:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.086 22:28:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.086 22:28:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:31.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:29:31.086 00:29:31.086 --- 10.0.0.2 ping statistics --- 00:29:31.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.086 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:29:31.086 22:28:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:31.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:29:31.086 00:29:31.086 --- 10.0.0.1 ping statistics --- 00:29:31.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.086 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:29:31.086 22:28:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.086 22:28:26 -- nvmf/common.sh@410 -- # return 0 00:29:31.086 22:28:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:31.086 22:28:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.086 22:28:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:31.086 22:28:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:31.086 22:28:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.086 22:28:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:31.086 22:28:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:31.086 22:28:26 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:29:31.086 22:28:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:31.086 22:28:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:31.086 22:28:26 -- common/autotest_common.sh@10 -- # set +x 00:29:31.086 22:28:26 -- nvmf/common.sh@469 -- # nvmfpid=3718205 00:29:31.086 22:28:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:31.086 22:28:26 -- nvmf/common.sh@470 -- # waitforlisten 3718205 00:29:31.086 22:28:26 -- common/autotest_common.sh@819 -- # '[' -z 3718205 ']' 00:29:31.086 22:28:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.086 22:28:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:31.086 22:28:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.086 22:28:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:31.086 22:28:26 -- common/autotest_common.sh@10 -- # set +x 00:29:31.405 [2024-07-24 22:28:26.212342] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:31.405 [2024-07-24 22:28:26.212386] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.405 EAL: No free 2048 kB hugepages reported on node 1 00:29:31.405 [2024-07-24 22:28:26.287561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.405 [2024-07-24 22:28:26.326326] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:31.405 [2024-07-24 22:28:26.326437] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.405 [2024-07-24 22:28:26.326445] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.405 [2024-07-24 22:28:26.326451] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
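For these phy-mode TCP jobs the two E810 ports are split so that target and initiator traffic goes over the NIC rather than loopback: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and the nvmf target is then launched inside the namespace. A rough recap of the setup traced above, using only commands already shown in this run (paths abbreviated relative to the SPDK checkout):

    # Put the target-side port in its own network namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port, verify reachability both ways, load the host driver.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp

    # The target runs inside the namespace so its listeners bind to 10.0.0.2.
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &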
00:29:31.405 [2024-07-24 22:28:26.326467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.974 22:28:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:31.974 22:28:27 -- common/autotest_common.sh@852 -- # return 0 00:29:31.974 22:28:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:31.974 22:28:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:31.974 22:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:31.974 22:28:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.974 22:28:27 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:31.974 22:28:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:31.974 22:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:31.974 [2024-07-24 22:28:27.047716] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.974 22:28:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:31.974 22:28:27 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:31.974 22:28:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:31.974 22:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:31.974 [2024-07-24 22:28:27.059847] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:31.974 22:28:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:31.974 22:28:27 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:31.974 22:28:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:31.974 22:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:31.974 null0 00:29:31.974 22:28:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:31.974 22:28:27 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:31.974 22:28:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:31.974 22:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:31.974 null1 00:29:31.974 22:28:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:31.974 22:28:27 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:29:31.974 22:28:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:31.974 22:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:31.974 22:28:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:31.974 22:28:27 -- host/discovery.sh@45 -- # hostpid=3718455 00:29:31.974 22:28:27 -- host/discovery.sh@46 -- # waitforlisten 3718455 /tmp/host.sock 00:29:31.974 22:28:27 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:31.974 22:28:27 -- common/autotest_common.sh@819 -- # '[' -z 3718455 ']' 00:29:31.975 22:28:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:29:31.975 22:28:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:31.975 22:28:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:31.975 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:31.975 22:28:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:31.975 22:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:32.234 [2024-07-24 22:28:27.131505] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
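The discovery test then runs two SPDK applications at once: the nvmf target brought up above (RPC on its default socket) and a second nvmf_tgt acting as the host side, with its own RPC socket at /tmp/host.sock, which is pointed at the discovery service on port 8009. A minimal sketch of that split, assuming the target is reachable at 10.0.0.2 as configured earlier (paths abbreviated relative to the SPDK checkout):

    # Target side: TCP transport, a discovery listener, and two null bdevs to export later.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    scripts/rpc.py bdev_null_create null0 1000 512
    scripts/rpc.py bdev_null_create null1 1000 512

    # Host side: a second app with its own socket, told to follow the discovery service.
    # Subsystems the target announces later are attached automatically under the 'nvme' prefix
    # (controller nvme0, bdevs nvme0n1/nvme0n2 in the checks that follow).
    build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test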
00:29:32.234 [2024-07-24 22:28:27.131546] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718455 ] 00:29:32.234 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.234 [2024-07-24 22:28:27.185541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.234 [2024-07-24 22:28:27.223278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:32.234 [2024-07-24 22:28:27.223394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.802 22:28:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:32.802 22:28:27 -- common/autotest_common.sh@852 -- # return 0 00:29:32.802 22:28:27 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:32.802 22:28:27 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:32.802 22:28:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:32.802 22:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:32.802 22:28:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:32.802 22:28:27 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:29:32.802 22:28:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:32.802 22:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:32.802 22:28:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:32.802 22:28:27 -- host/discovery.sh@72 -- # notify_id=0 00:29:33.062 22:28:27 -- host/discovery.sh@78 -- # get_subsystem_names 00:29:33.062 22:28:27 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:33.062 22:28:27 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:33.062 22:28:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.062 22:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:33.062 22:28:27 -- host/discovery.sh@59 -- # sort 00:29:33.062 22:28:27 -- host/discovery.sh@59 -- # xargs 00:29:33.062 22:28:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.062 22:28:27 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:29:33.062 22:28:27 -- host/discovery.sh@79 -- # get_bdev_list 00:29:33.062 22:28:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.062 22:28:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:33.062 22:28:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.062 22:28:27 -- host/discovery.sh@55 -- # sort 00:29:33.062 22:28:27 -- common/autotest_common.sh@10 -- # set +x 00:29:33.062 22:28:27 -- host/discovery.sh@55 -- # xargs 00:29:33.062 22:28:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.062 22:28:28 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:29:33.062 22:28:28 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:33.062 22:28:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.062 22:28:28 -- common/autotest_common.sh@10 -- # set +x 00:29:33.062 22:28:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.062 22:28:28 -- host/discovery.sh@82 -- # get_subsystem_names 00:29:33.062 22:28:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:33.062 22:28:28 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:29:33.062 22:28:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.062 22:28:28 -- host/discovery.sh@59 -- # sort 00:29:33.062 22:28:28 -- common/autotest_common.sh@10 -- # set +x 00:29:33.062 22:28:28 -- host/discovery.sh@59 -- # xargs 00:29:33.062 22:28:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.062 22:28:28 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:29:33.062 22:28:28 -- host/discovery.sh@83 -- # get_bdev_list 00:29:33.062 22:28:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.062 22:28:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:33.062 22:28:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.062 22:28:28 -- common/autotest_common.sh@10 -- # set +x 00:29:33.062 22:28:28 -- host/discovery.sh@55 -- # sort 00:29:33.062 22:28:28 -- host/discovery.sh@55 -- # xargs 00:29:33.062 22:28:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.062 22:28:28 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:29:33.062 22:28:28 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:33.062 22:28:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.062 22:28:28 -- common/autotest_common.sh@10 -- # set +x 00:29:33.062 22:28:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.062 22:28:28 -- host/discovery.sh@86 -- # get_subsystem_names 00:29:33.062 22:28:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:33.062 22:28:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:33.062 22:28:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.062 22:28:28 -- common/autotest_common.sh@10 -- # set +x 00:29:33.062 22:28:28 -- host/discovery.sh@59 -- # sort 00:29:33.062 22:28:28 -- host/discovery.sh@59 -- # xargs 00:29:33.062 22:28:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.321 22:28:28 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:29:33.321 22:28:28 -- host/discovery.sh@87 -- # get_bdev_list 00:29:33.321 22:28:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.321 22:28:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:33.321 22:28:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.321 22:28:28 -- host/discovery.sh@55 -- # sort 00:29:33.321 22:28:28 -- common/autotest_common.sh@10 -- # set +x 00:29:33.321 22:28:28 -- host/discovery.sh@55 -- # xargs 00:29:33.321 22:28:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.321 22:28:28 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:29:33.321 22:28:28 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:33.321 22:28:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.321 22:28:28 -- common/autotest_common.sh@10 -- # set +x 00:29:33.321 [2024-07-24 22:28:28.263060] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.321 22:28:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.321 22:28:28 -- host/discovery.sh@92 -- # get_subsystem_names 00:29:33.321 22:28:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:33.321 22:28:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:33.321 22:28:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.321 22:28:28 -- host/discovery.sh@59 -- # sort 00:29:33.321 22:28:28 -- common/autotest_common.sh@10 -- # set +x 00:29:33.321 22:28:28 
-- host/discovery.sh@59 -- # xargs 00:29:33.321 22:28:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.321 22:28:28 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:29:33.321 22:28:28 -- host/discovery.sh@93 -- # get_bdev_list 00:29:33.321 22:28:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.321 22:28:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:33.321 22:28:28 -- host/discovery.sh@55 -- # sort 00:29:33.321 22:28:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.321 22:28:28 -- common/autotest_common.sh@10 -- # set +x 00:29:33.321 22:28:28 -- host/discovery.sh@55 -- # xargs 00:29:33.321 22:28:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.321 22:28:28 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:29:33.321 22:28:28 -- host/discovery.sh@94 -- # get_notification_count 00:29:33.321 22:28:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:33.321 22:28:28 -- host/discovery.sh@74 -- # jq '. | length' 00:29:33.321 22:28:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.321 22:28:28 -- common/autotest_common.sh@10 -- # set +x 00:29:33.321 22:28:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.321 22:28:28 -- host/discovery.sh@74 -- # notification_count=0 00:29:33.321 22:28:28 -- host/discovery.sh@75 -- # notify_id=0 00:29:33.321 22:28:28 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:29:33.321 22:28:28 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:33.321 22:28:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.321 22:28:28 -- common/autotest_common.sh@10 -- # set +x 00:29:33.321 22:28:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.321 22:28:28 -- host/discovery.sh@100 -- # sleep 1 00:29:33.889 [2024-07-24 22:28:28.979059] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:33.889 [2024-07-24 22:28:28.979086] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:33.889 [2024-07-24 22:28:28.979099] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:34.148 [2024-07-24 22:28:29.107499] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:34.407 [2024-07-24 22:28:29.291094] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:34.407 [2024-07-24 22:28:29.291114] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:34.407 22:28:29 -- host/discovery.sh@101 -- # get_subsystem_names 00:29:34.407 22:28:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:34.407 22:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:34.407 22:28:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:34.407 22:28:29 -- common/autotest_common.sh@10 -- # set +x 00:29:34.407 22:28:29 -- host/discovery.sh@59 -- # sort 00:29:34.407 22:28:29 -- host/discovery.sh@59 -- # xargs 00:29:34.407 22:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:34.407 22:28:29 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:34.407 22:28:29 -- host/discovery.sh@102 -- # get_bdev_list 00:29:34.407 22:28:29 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:34.407 22:28:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:34.407 22:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:34.407 22:28:29 -- host/discovery.sh@55 -- # sort 00:29:34.407 22:28:29 -- common/autotest_common.sh@10 -- # set +x 00:29:34.407 22:28:29 -- host/discovery.sh@55 -- # xargs 00:29:34.407 22:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:34.407 22:28:29 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:29:34.407 22:28:29 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:29:34.407 22:28:29 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:34.407 22:28:29 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:34.407 22:28:29 -- host/discovery.sh@63 -- # sort -n 00:29:34.407 22:28:29 -- host/discovery.sh@63 -- # xargs 00:29:34.407 22:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:34.407 22:28:29 -- common/autotest_common.sh@10 -- # set +x 00:29:34.407 22:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:34.666 22:28:29 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:29:34.666 22:28:29 -- host/discovery.sh@104 -- # get_notification_count 00:29:34.666 22:28:29 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:34.666 22:28:29 -- host/discovery.sh@74 -- # jq '. | length' 00:29:34.666 22:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:34.666 22:28:29 -- common/autotest_common.sh@10 -- # set +x 00:29:34.666 22:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:34.666 22:28:29 -- host/discovery.sh@74 -- # notification_count=1 00:29:34.666 22:28:29 -- host/discovery.sh@75 -- # notify_id=1 00:29:34.666 22:28:29 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:29:34.666 22:28:29 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:34.666 22:28:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:34.666 22:28:29 -- common/autotest_common.sh@10 -- # set +x 00:29:34.666 22:28:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:34.666 22:28:29 -- host/discovery.sh@109 -- # sleep 1 00:29:35.604 22:28:30 -- host/discovery.sh@110 -- # get_bdev_list 00:29:35.604 22:28:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:35.604 22:28:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:35.604 22:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.604 22:28:30 -- host/discovery.sh@55 -- # sort 00:29:35.604 22:28:30 -- common/autotest_common.sh@10 -- # set +x 00:29:35.604 22:28:30 -- host/discovery.sh@55 -- # xargs 00:29:35.604 22:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.604 22:28:30 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:35.604 22:28:30 -- host/discovery.sh@111 -- # get_notification_count 00:29:35.604 22:28:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:29:35.604 22:28:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:35.604 22:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.604 22:28:30 -- common/autotest_common.sh@10 -- # set +x 00:29:35.604 22:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.604 22:28:30 -- host/discovery.sh@74 -- # notification_count=1 00:29:35.604 22:28:30 -- host/discovery.sh@75 -- # notify_id=2 00:29:35.604 22:28:30 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:29:35.604 22:28:30 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:35.604 22:28:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:35.604 22:28:30 -- common/autotest_common.sh@10 -- # set +x 00:29:35.604 [2024-07-24 22:28:30.725900] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:35.604 [2024-07-24 22:28:30.726637] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:35.604 [2024-07-24 22:28:30.726664] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:35.604 22:28:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:35.604 22:28:30 -- host/discovery.sh@117 -- # sleep 1 00:29:35.863 [2024-07-24 22:28:30.813896] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:29:36.122 [2024-07-24 22:28:31.120397] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:36.122 [2024-07-24 22:28:31.120414] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:36.122 [2024-07-24 22:28:31.120419] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:36.689 22:28:31 -- host/discovery.sh@118 -- # get_subsystem_names 00:29:36.689 22:28:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:36.689 22:28:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:36.689 22:28:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.689 22:28:31 -- host/discovery.sh@59 -- # sort 00:29:36.689 22:28:31 -- common/autotest_common.sh@10 -- # set +x 00:29:36.689 22:28:31 -- host/discovery.sh@59 -- # xargs 00:29:36.689 22:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.689 22:28:31 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:36.689 22:28:31 -- host/discovery.sh@119 -- # get_bdev_list 00:29:36.689 22:28:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:36.689 22:28:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:36.689 22:28:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.689 22:28:31 -- host/discovery.sh@55 -- # sort 00:29:36.689 22:28:31 -- common/autotest_common.sh@10 -- # set +x 00:29:36.689 22:28:31 -- host/discovery.sh@55 -- # xargs 00:29:36.689 22:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.948 22:28:31 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:36.948 22:28:31 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:29:36.948 22:28:31 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:36.948 22:28:31 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:36.948 22:28:31 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.948 22:28:31 -- host/discovery.sh@63 -- # sort -n 00:29:36.948 22:28:31 -- common/autotest_common.sh@10 -- # set +x 00:29:36.948 22:28:31 -- host/discovery.sh@63 -- # xargs 00:29:36.948 22:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.948 22:28:31 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:36.948 22:28:31 -- host/discovery.sh@121 -- # get_notification_count 00:29:36.948 22:28:31 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:36.948 22:28:31 -- host/discovery.sh@74 -- # jq '. | length' 00:29:36.948 22:28:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.948 22:28:31 -- common/autotest_common.sh@10 -- # set +x 00:29:36.948 22:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.948 22:28:31 -- host/discovery.sh@74 -- # notification_count=0 00:29:36.948 22:28:31 -- host/discovery.sh@75 -- # notify_id=2 00:29:36.948 22:28:31 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:29:36.948 22:28:31 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:36.948 22:28:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:36.948 22:28:31 -- common/autotest_common.sh@10 -- # set +x 00:29:36.948 [2024-07-24 22:28:31.937509] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:36.948 [2024-07-24 22:28:31.937531] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:36.948 22:28:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:36.948 22:28:31 -- host/discovery.sh@127 -- # sleep 1 00:29:36.948 [2024-07-24 22:28:31.945449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.948 [2024-07-24 22:28:31.945467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.948 [2024-07-24 22:28:31.945475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.948 [2024-07-24 22:28:31.945482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.948 [2024-07-24 22:28:31.945489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.948 [2024-07-24 22:28:31.945496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.948 [2024-07-24 22:28:31.945504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:36.948 [2024-07-24 22:28:31.945510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.948 [2024-07-24 22:28:31.945517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148de80 is same with the state(5) to be set 00:29:36.948 [2024-07-24 22:28:31.955462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148de80 (9): Bad file descriptor 00:29:36.948 [2024-07-24 22:28:31.965500] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.948 [2024-07-24 22:28:31.965899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.948 [2024-07-24 22:28:31.966377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.948 [2024-07-24 22:28:31.966389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148de80 with addr=10.0.0.2, port=4420 00:29:36.948 [2024-07-24 22:28:31.966397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148de80 is same with the state(5) to be set 00:29:36.948 [2024-07-24 22:28:31.966409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148de80 (9): Bad file descriptor 00:29:36.948 [2024-07-24 22:28:31.966431] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.948 [2024-07-24 22:28:31.966439] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:36.948 [2024-07-24 22:28:31.966446] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:36.948 [2024-07-24 22:28:31.966457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.948 [2024-07-24 22:28:31.975551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.948 [2024-07-24 22:28:31.976018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.948 [2024-07-24 22:28:31.976538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.948 [2024-07-24 22:28:31.976550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148de80 with addr=10.0.0.2, port=4420 00:29:36.948 [2024-07-24 22:28:31.976561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148de80 is same with the state(5) to be set 00:29:36.948 [2024-07-24 22:28:31.976573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148de80 (9): Bad file descriptor 00:29:36.948 [2024-07-24 22:28:31.976589] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.948 [2024-07-24 22:28:31.976596] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:36.948 [2024-07-24 22:28:31.976602] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:36.948 [2024-07-24 22:28:31.976611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.948 [2024-07-24 22:28:31.985601] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.948 [2024-07-24 22:28:31.986029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.948 [2024-07-24 22:28:31.986448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.949 [2024-07-24 22:28:31.986459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148de80 with addr=10.0.0.2, port=4420 00:29:36.949 [2024-07-24 22:28:31.986466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148de80 is same with the state(5) to be set 00:29:36.949 [2024-07-24 22:28:31.986486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148de80 (9): Bad file descriptor 00:29:36.949 [2024-07-24 22:28:31.986501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.949 [2024-07-24 22:28:31.986508] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:36.949 [2024-07-24 22:28:31.986514] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:36.949 [2024-07-24 22:28:31.986524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.949 [2024-07-24 22:28:31.995656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.949 [2024-07-24 22:28:31.996188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.949 [2024-07-24 22:28:31.996665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.949 [2024-07-24 22:28:31.996675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148de80 with addr=10.0.0.2, port=4420 00:29:36.949 [2024-07-24 22:28:31.996681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148de80 is same with the state(5) to be set 00:29:36.949 [2024-07-24 22:28:31.996697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148de80 (9): Bad file descriptor 00:29:36.949 [2024-07-24 22:28:31.996711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.949 [2024-07-24 22:28:31.996717] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:36.949 [2024-07-24 22:28:31.996723] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:36.949 [2024-07-24 22:28:31.996732] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.949 [2024-07-24 22:28:32.005705] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.949 [2024-07-24 22:28:32.006133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.949 [2024-07-24 22:28:32.006542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.949 [2024-07-24 22:28:32.006553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148de80 with addr=10.0.0.2, port=4420 00:29:36.949 [2024-07-24 22:28:32.006559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148de80 is same with the state(5) to be set 00:29:36.949 [2024-07-24 22:28:32.006572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148de80 (9): Bad file descriptor 00:29:36.949 [2024-07-24 22:28:32.006593] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.949 [2024-07-24 22:28:32.006600] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:36.949 [2024-07-24 22:28:32.006606] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:36.949 [2024-07-24 22:28:32.006615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.949 [2024-07-24 22:28:32.015753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.949 [2024-07-24 22:28:32.016286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.949 [2024-07-24 22:28:32.016790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.949 [2024-07-24 22:28:32.016800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x148de80 with addr=10.0.0.2, port=4420 00:29:36.949 [2024-07-24 22:28:32.016807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148de80 is same with the state(5) to be set 00:29:36.949 [2024-07-24 22:28:32.016817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x148de80 (9): Bad file descriptor 00:29:36.949 [2024-07-24 22:28:32.016831] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:36.949 [2024-07-24 22:28:32.016838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:36.949 [2024-07-24 22:28:32.016844] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:36.949 [2024-07-24 22:28:32.016853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
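errno 111 in the connect() failures above is ECONNREFUSED on Linux; the pattern matches a target that has stopped listening on 10.0.0.2:4420 while the host keeps retrying the controller reset (the entries that follow confirm the 4420 path was removed and only 4421 remains). A hypothetical spot-check of that state, not part of discovery.sh, could use bash's /dev/tcp redirection from the initiator side:

# hypothetical spot-check (assumes the initiator can reach 10.0.0.2, as the test's ping does)
timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null || echo "4420 refused (ECONNREFUSED, errno 111)"
timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4421' 2>/dev/null && echo "4421 still accepting connections"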
00:29:36.949 [2024-07-24 22:28:32.023855] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:36.949 [2024-07-24 22:28:32.023871] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:37.884 22:28:32 -- host/discovery.sh@128 -- # get_subsystem_names 00:29:37.884 22:28:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:37.884 22:28:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:37.884 22:28:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.884 22:28:32 -- host/discovery.sh@59 -- # sort 00:29:37.884 22:28:32 -- common/autotest_common.sh@10 -- # set +x 00:29:37.884 22:28:32 -- host/discovery.sh@59 -- # xargs 00:29:37.884 22:28:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.884 22:28:32 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:37.884 22:28:32 -- host/discovery.sh@129 -- # get_bdev_list 00:29:37.884 22:28:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:37.884 22:28:32 -- host/discovery.sh@55 -- # xargs 00:29:37.884 22:28:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:37.884 22:28:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.884 22:28:32 -- host/discovery.sh@55 -- # sort 00:29:37.884 22:28:32 -- common/autotest_common.sh@10 -- # set +x 00:29:38.142 22:28:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.142 22:28:33 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:38.142 22:28:33 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:29:38.142 22:28:33 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:29:38.142 22:28:33 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:38.142 22:28:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.142 22:28:33 -- host/discovery.sh@63 -- # sort -n 00:29:38.142 22:28:33 -- common/autotest_common.sh@10 -- # set +x 00:29:38.142 22:28:33 -- host/discovery.sh@63 -- # xargs 00:29:38.142 22:28:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.142 22:28:33 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:29:38.142 22:28:33 -- host/discovery.sh@131 -- # get_notification_count 00:29:38.142 22:28:33 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:38.142 22:28:33 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:38.142 22:28:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.142 22:28:33 -- common/autotest_common.sh@10 -- # set +x 00:29:38.142 22:28:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.142 22:28:33 -- host/discovery.sh@74 -- # notification_count=0 00:29:38.142 22:28:33 -- host/discovery.sh@75 -- # notify_id=2 00:29:38.142 22:28:33 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:29:38.142 22:28:33 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:29:38.142 22:28:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.142 22:28:33 -- common/autotest_common.sh@10 -- # set +x 00:29:38.142 22:28:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.142 22:28:33 -- host/discovery.sh@135 -- # sleep 1 00:29:39.083 22:28:34 -- host/discovery.sh@136 -- # get_subsystem_names 00:29:39.083 22:28:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:39.083 22:28:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:29:39.083 22:28:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:39.083 22:28:34 -- common/autotest_common.sh@10 -- # set +x 00:29:39.083 22:28:34 -- host/discovery.sh@59 -- # sort 00:29:39.083 22:28:34 -- host/discovery.sh@59 -- # xargs 00:29:39.083 22:28:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:39.083 22:28:34 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:29:39.083 22:28:34 -- host/discovery.sh@137 -- # get_bdev_list 00:29:39.083 22:28:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:39.083 22:28:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:39.083 22:28:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:39.083 22:28:34 -- host/discovery.sh@55 -- # sort 00:29:39.083 22:28:34 -- common/autotest_common.sh@10 -- # set +x 00:29:39.083 22:28:34 -- host/discovery.sh@55 -- # xargs 00:29:39.342 22:28:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:39.342 22:28:34 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:29:39.342 22:28:34 -- host/discovery.sh@138 -- # get_notification_count 00:29:39.342 22:28:34 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:39.342 22:28:34 -- host/discovery.sh@74 -- # jq '. 
| length' 00:29:39.342 22:28:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:39.342 22:28:34 -- common/autotest_common.sh@10 -- # set +x 00:29:39.342 22:28:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:39.342 22:28:34 -- host/discovery.sh@74 -- # notification_count=2 00:29:39.342 22:28:34 -- host/discovery.sh@75 -- # notify_id=4 00:29:39.342 22:28:34 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:29:39.342 22:28:34 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:39.342 22:28:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:39.342 22:28:34 -- common/autotest_common.sh@10 -- # set +x 00:29:40.276 [2024-07-24 22:28:35.324189] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:40.276 [2024-07-24 22:28:35.324206] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:40.276 [2024-07-24 22:28:35.324218] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:40.535 [2024-07-24 22:28:35.451611] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:29:40.535 [2024-07-24 22:28:35.558614] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:40.535 [2024-07-24 22:28:35.558640] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:40.535 22:28:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.535 22:28:35 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:40.535 22:28:35 -- common/autotest_common.sh@640 -- # local es=0 00:29:40.535 22:28:35 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:40.535 22:28:35 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:40.535 22:28:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:40.535 22:28:35 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:40.535 22:28:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:40.535 22:28:35 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:40.535 22:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.535 22:28:35 -- common/autotest_common.sh@10 -- # set +x 00:29:40.535 request: 00:29:40.535 { 00:29:40.535 "name": "nvme", 00:29:40.535 "trtype": "tcp", 00:29:40.535 "traddr": "10.0.0.2", 00:29:40.535 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:40.535 "adrfam": "ipv4", 00:29:40.535 "trsvcid": "8009", 00:29:40.535 "wait_for_attach": true, 00:29:40.535 "method": "bdev_nvme_start_discovery", 00:29:40.535 "req_id": 1 00:29:40.535 } 00:29:40.535 Got JSON-RPC error response 00:29:40.535 response: 00:29:40.535 { 00:29:40.536 "code": -17, 00:29:40.536 "message": "File exists" 00:29:40.536 } 00:29:40.536 22:28:35 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:40.536 22:28:35 -- common/autotest_common.sh@643 -- # es=1 00:29:40.536 22:28:35 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:40.536 22:28:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:40.536 22:28:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:40.536 22:28:35 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:29:40.536 22:28:35 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:40.536 22:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.536 22:28:35 -- common/autotest_common.sh@10 -- # set +x 00:29:40.536 22:28:35 -- host/discovery.sh@67 -- # sort 00:29:40.536 22:28:35 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:40.536 22:28:35 -- host/discovery.sh@67 -- # xargs 00:29:40.536 22:28:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.536 22:28:35 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:29:40.536 22:28:35 -- host/discovery.sh@147 -- # get_bdev_list 00:29:40.536 22:28:35 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:40.536 22:28:35 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:40.536 22:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.536 22:28:35 -- host/discovery.sh@55 -- # sort 00:29:40.536 22:28:35 -- common/autotest_common.sh@10 -- # set +x 00:29:40.536 22:28:35 -- host/discovery.sh@55 -- # xargs 00:29:40.536 22:28:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.795 22:28:35 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:40.795 22:28:35 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:40.795 22:28:35 -- common/autotest_common.sh@640 -- # local es=0 00:29:40.795 22:28:35 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:40.795 22:28:35 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:40.795 22:28:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:40.795 22:28:35 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:40.795 22:28:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:40.795 22:28:35 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:29:40.795 22:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.795 22:28:35 -- common/autotest_common.sh@10 -- # set +x 00:29:40.795 request: 00:29:40.795 { 00:29:40.795 "name": "nvme_second", 00:29:40.795 "trtype": "tcp", 00:29:40.795 "traddr": "10.0.0.2", 00:29:40.795 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:40.795 "adrfam": "ipv4", 00:29:40.795 "trsvcid": "8009", 00:29:40.795 "wait_for_attach": true, 00:29:40.795 "method": "bdev_nvme_start_discovery", 00:29:40.795 "req_id": 1 00:29:40.795 } 00:29:40.795 Got JSON-RPC error response 00:29:40.795 response: 00:29:40.795 { 00:29:40.795 "code": -17, 00:29:40.795 "message": "File exists" 00:29:40.795 } 00:29:40.795 22:28:35 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:40.795 22:28:35 -- common/autotest_common.sh@643 -- # es=1 00:29:40.795 22:28:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:40.795 22:28:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:40.795 22:28:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:40.795 
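Both -17 responses above show bdev_nvme_start_discovery rejecting a second discovery service for the same 10.0.0.2:8009 endpoint, first under the existing name nvme and then under the new name nvme_second. Issued directly with SPDK's scripts/rpc.py (a minimal sketch, assuming the SPDK tree checked out by this job and the same /tmp/host.sock application socket), the call mirrors the JSON-RPC request printed above and is expected to fail the same way while the first discovery service is still running:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w
# expected result: JSON-RPC error -17, "File exists"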
22:28:35 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:29:40.795 22:28:35 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:40.795 22:28:35 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:40.795 22:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.795 22:28:35 -- host/discovery.sh@67 -- # sort 00:29:40.795 22:28:35 -- common/autotest_common.sh@10 -- # set +x 00:29:40.795 22:28:35 -- host/discovery.sh@67 -- # xargs 00:29:40.795 22:28:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.795 22:28:35 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:29:40.795 22:28:35 -- host/discovery.sh@153 -- # get_bdev_list 00:29:40.795 22:28:35 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:40.795 22:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.795 22:28:35 -- common/autotest_common.sh@10 -- # set +x 00:29:40.795 22:28:35 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:29:40.795 22:28:35 -- host/discovery.sh@55 -- # sort 00:29:40.795 22:28:35 -- host/discovery.sh@55 -- # xargs 00:29:40.795 22:28:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:40.795 22:28:35 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:29:40.795 22:28:35 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:40.795 22:28:35 -- common/autotest_common.sh@640 -- # local es=0 00:29:40.795 22:28:35 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:40.795 22:28:35 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:29:40.795 22:28:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:40.795 22:28:35 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:29:40.795 22:28:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:40.795 22:28:35 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:29:40.795 22:28:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:40.795 22:28:35 -- common/autotest_common.sh@10 -- # set +x 00:29:41.731 [2024-07-24 22:28:36.811515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.731 [2024-07-24 22:28:36.812016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.731 [2024-07-24 22:28:36.812027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14c5870 with addr=10.0.0.2, port=8010 00:29:41.731 [2024-07-24 22:28:36.812037] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:41.731 [2024-07-24 22:28:36.812046] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:41.731 [2024-07-24 22:28:36.812053] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:43.107 [2024-07-24 22:28:37.813920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.107 [2024-07-24 22:28:37.814397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.107 [2024-07-24 22:28:37.814408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x14c01a0 with addr=10.0.0.2, port=8010 00:29:43.107 [2024-07-24 22:28:37.814418] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:43.107 [2024-07-24 22:28:37.814423] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:43.107 [2024-07-24 22:28:37.814429] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:29:44.043 [2024-07-24 22:28:38.815892] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:29:44.043 request: 00:29:44.043 { 00:29:44.043 "name": "nvme_second", 00:29:44.043 "trtype": "tcp", 00:29:44.043 "traddr": "10.0.0.2", 00:29:44.043 "hostnqn": "nqn.2021-12.io.spdk:test", 00:29:44.043 "adrfam": "ipv4", 00:29:44.043 "trsvcid": "8010", 00:29:44.043 "attach_timeout_ms": 3000, 00:29:44.043 "method": "bdev_nvme_start_discovery", 00:29:44.043 "req_id": 1 00:29:44.043 } 00:29:44.043 Got JSON-RPC error response 00:29:44.043 response: 00:29:44.043 { 00:29:44.043 "code": -110, 00:29:44.043 "message": "Connection timed out" 00:29:44.043 } 00:29:44.043 22:28:38 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:29:44.043 22:28:38 -- common/autotest_common.sh@643 -- # es=1 00:29:44.043 22:28:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:44.043 22:28:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:44.043 22:28:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:44.043 22:28:38 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:29:44.043 22:28:38 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:44.043 22:28:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:44.043 22:28:38 -- common/autotest_common.sh@10 -- # set +x 00:29:44.043 22:28:38 -- host/discovery.sh@67 -- # xargs 00:29:44.043 22:28:38 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:29:44.043 22:28:38 -- host/discovery.sh@67 -- # sort 00:29:44.043 22:28:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:44.043 22:28:38 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:29:44.043 22:28:38 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:29:44.043 22:28:38 -- host/discovery.sh@162 -- # kill 3718455 00:29:44.043 22:28:38 -- host/discovery.sh@163 -- # nvmftestfini 00:29:44.043 22:28:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:44.043 22:28:38 -- nvmf/common.sh@116 -- # sync 00:29:44.043 22:28:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:44.043 22:28:38 -- nvmf/common.sh@119 -- # set +e 00:29:44.043 22:28:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:44.043 22:28:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:44.043 rmmod nvme_tcp 00:29:44.043 rmmod nvme_fabrics 00:29:44.043 rmmod nvme_keyring 00:29:44.043 22:28:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:44.043 22:28:38 -- nvmf/common.sh@123 -- # set -e 00:29:44.043 22:28:38 -- nvmf/common.sh@124 -- # return 0 00:29:44.043 22:28:38 -- nvmf/common.sh@477 -- # '[' -n 3718205 ']' 00:29:44.043 22:28:38 -- nvmf/common.sh@478 -- # killprocess 3718205 00:29:44.043 22:28:38 -- common/autotest_common.sh@926 -- # '[' -z 3718205 ']' 00:29:44.043 22:28:38 -- common/autotest_common.sh@930 -- # kill -0 3718205 00:29:44.043 22:28:38 -- common/autotest_common.sh@931 -- # uname 00:29:44.043 22:28:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:44.043 22:28:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3718205 
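The final nvme_second attempt earlier in this block targets port 8010, where nothing listens, and passes -T 3000 (attach_timeout_ms); after two refused connects roughly a second apart the discovery poller times out attaching the discovery ctrlr, and the RPC completes with -110, Connection timed out, instead of blocking. A minimal sketch of the same call, under the same assumptions as the rpc.py example above:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock \
    bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -T 3000
# expected result after ~3 s: JSON-RPC error -110, "Connection timed out"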
00:29:44.043 22:28:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:44.043 22:28:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:44.043 22:28:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3718205' 00:29:44.043 killing process with pid 3718205 00:29:44.043 22:28:38 -- common/autotest_common.sh@945 -- # kill 3718205 00:29:44.043 22:28:38 -- common/autotest_common.sh@950 -- # wait 3718205 00:29:44.043 22:28:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:44.043 22:28:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:44.043 22:28:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:44.043 22:28:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:44.043 22:28:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:44.043 22:28:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.043 22:28:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.043 22:28:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.575 22:28:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:46.575 00:29:46.575 real 0m20.572s 00:29:46.575 user 0m27.728s 00:29:46.575 sys 0m5.523s 00:29:46.575 22:28:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.575 22:28:41 -- common/autotest_common.sh@10 -- # set +x 00:29:46.575 ************************************ 00:29:46.575 END TEST nvmf_discovery 00:29:46.575 ************************************ 00:29:46.575 22:28:41 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:46.575 22:28:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:46.575 22:28:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:46.575 22:28:41 -- common/autotest_common.sh@10 -- # set +x 00:29:46.575 ************************************ 00:29:46.575 START TEST nvmf_discovery_remove_ifc 00:29:46.575 ************************************ 00:29:46.575 22:28:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:29:46.575 * Looking for test storage... 
00:29:46.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:46.575 22:28:41 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.575 22:28:41 -- nvmf/common.sh@7 -- # uname -s 00:29:46.575 22:28:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.575 22:28:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.575 22:28:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.575 22:28:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:46.575 22:28:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.575 22:28:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.575 22:28:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.575 22:28:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.575 22:28:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.575 22:28:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.575 22:28:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:46.575 22:28:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:46.575 22:28:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.575 22:28:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.575 22:28:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.575 22:28:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.575 22:28:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.575 22:28:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.575 22:28:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.575 22:28:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.575 22:28:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.575 22:28:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.575 22:28:41 -- paths/export.sh@5 -- # export PATH 00:29:46.575 22:28:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.575 22:28:41 -- nvmf/common.sh@46 -- # : 0 00:29:46.575 22:28:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:46.575 22:28:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:46.575 22:28:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:46.575 22:28:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.575 22:28:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.575 22:28:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:46.575 22:28:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:46.575 22:28:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:46.575 22:28:41 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:29:46.575 22:28:41 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:29:46.575 22:28:41 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:29:46.575 22:28:41 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:29:46.575 22:28:41 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:29:46.575 22:28:41 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:29:46.575 22:28:41 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:29:46.575 22:28:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:46.575 22:28:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.575 22:28:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:46.575 22:28:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:46.575 22:28:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:46.575 22:28:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.575 22:28:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:46.575 22:28:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.575 22:28:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:46.575 22:28:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:46.575 22:28:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:46.575 22:28:41 -- common/autotest_common.sh@10 -- # set +x 00:29:51.870 22:28:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:51.870 22:28:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:51.870 22:28:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:51.870 22:28:46 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:51.870 22:28:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:51.870 22:28:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:51.870 22:28:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:51.871 22:28:46 -- nvmf/common.sh@294 -- # net_devs=() 00:29:51.871 22:28:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:51.871 22:28:46 -- nvmf/common.sh@295 -- # e810=() 00:29:51.871 22:28:46 -- nvmf/common.sh@295 -- # local -ga e810 00:29:51.871 22:28:46 -- nvmf/common.sh@296 -- # x722=() 00:29:51.871 22:28:46 -- nvmf/common.sh@296 -- # local -ga x722 00:29:51.871 22:28:46 -- nvmf/common.sh@297 -- # mlx=() 00:29:51.871 22:28:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:51.871 22:28:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:51.871 22:28:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:51.871 22:28:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:51.871 22:28:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:51.871 22:28:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:51.871 22:28:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:51.871 22:28:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:51.871 22:28:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:51.871 22:28:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:51.871 22:28:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:51.871 22:28:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:51.871 22:28:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:51.871 22:28:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:51.871 22:28:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:51.871 22:28:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:51.871 22:28:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:51.871 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:51.871 22:28:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:51.871 22:28:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:51.871 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:51.871 22:28:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:51.871 22:28:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:51.871 22:28:46 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:51.871 22:28:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.871 22:28:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:51.871 22:28:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.871 22:28:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:51.871 Found net devices under 0000:86:00.0: cvl_0_0 00:29:51.871 22:28:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.871 22:28:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:51.871 22:28:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:51.871 22:28:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:51.871 22:28:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:51.871 22:28:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:51.871 Found net devices under 0000:86:00.1: cvl_0_1 00:29:51.871 22:28:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:51.871 22:28:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:51.871 22:28:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:51.871 22:28:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:51.871 22:28:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.871 22:28:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:51.871 22:28:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:51.871 22:28:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:51.871 22:28:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:51.871 22:28:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:51.871 22:28:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:51.871 22:28:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:51.871 22:28:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.871 22:28:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:51.871 22:28:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:51.871 22:28:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:51.871 22:28:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:51.871 22:28:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:51.871 22:28:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:51.871 22:28:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:51.871 22:28:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:51.871 22:28:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:51.871 22:28:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:51.871 22:28:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:51.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:51.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:29:51.871 00:29:51.871 --- 10.0.0.2 ping statistics --- 00:29:51.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.871 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:29:51.871 22:28:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:51.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:51.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:29:51.871 00:29:51.871 --- 10.0.0.1 ping statistics --- 00:29:51.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.871 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:29:51.871 22:28:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.871 22:28:46 -- nvmf/common.sh@410 -- # return 0 00:29:51.871 22:28:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:51.871 22:28:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.871 22:28:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:51.871 22:28:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.871 22:28:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:51.871 22:28:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:51.871 22:28:46 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:29:51.871 22:28:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:51.871 22:28:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:51.871 22:28:46 -- common/autotest_common.sh@10 -- # set +x 00:29:51.871 22:28:46 -- nvmf/common.sh@469 -- # nvmfpid=3724023 00:29:51.871 22:28:46 -- nvmf/common.sh@470 -- # waitforlisten 3724023 00:29:51.871 22:28:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:29:51.871 22:28:46 -- common/autotest_common.sh@819 -- # '[' -z 3724023 ']' 00:29:51.871 22:28:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.871 22:28:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:51.871 22:28:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.871 22:28:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:51.871 22:28:46 -- common/autotest_common.sh@10 -- # set +x 00:29:51.871 [2024-07-24 22:28:46.761262] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:29:51.871 [2024-07-24 22:28:46.761306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.871 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.871 [2024-07-24 22:28:46.818954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.871 [2024-07-24 22:28:46.856024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:51.871 [2024-07-24 22:28:46.856167] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:51.871 [2024-07-24 22:28:46.856176] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.871 [2024-07-24 22:28:46.856182] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.871 [2024-07-24 22:28:46.856199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.439 22:28:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:52.439 22:28:47 -- common/autotest_common.sh@852 -- # return 0 00:29:52.439 22:28:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:52.439 22:28:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:52.439 22:28:47 -- common/autotest_common.sh@10 -- # set +x 00:29:52.698 22:28:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.698 22:28:47 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:29:52.698 22:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.698 22:28:47 -- common/autotest_common.sh@10 -- # set +x 00:29:52.698 [2024-07-24 22:28:47.597218] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.698 [2024-07-24 22:28:47.605370] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:52.698 null0 00:29:52.698 [2024-07-24 22:28:47.637354] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.698 22:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.698 22:28:47 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3724057 00:29:52.698 22:28:47 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3724057 /tmp/host.sock 00:29:52.698 22:28:47 -- common/autotest_common.sh@819 -- # '[' -z 3724057 ']' 00:29:52.698 22:28:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:29:52.698 22:28:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:52.698 22:28:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:52.698 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:52.698 22:28:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:52.698 22:28:47 -- common/autotest_common.sh@10 -- # set +x 00:29:52.698 22:28:47 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:29:52.698 [2024-07-24 22:28:47.700979] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:29:52.698 [2024-07-24 22:28:47.701024] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724057 ] 00:29:52.698 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.698 [2024-07-24 22:28:47.756052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.698 [2024-07-24 22:28:47.794879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:52.698 [2024-07-24 22:28:47.794998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.957 22:28:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:52.957 22:28:47 -- common/autotest_common.sh@852 -- # return 0 00:29:52.957 22:28:47 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:52.957 22:28:47 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:29:52.957 22:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.957 22:28:47 -- common/autotest_common.sh@10 -- # set +x 00:29:52.957 22:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.957 22:28:47 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:29:52.957 22:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.957 22:28:47 -- common/autotest_common.sh@10 -- # set +x 00:29:52.957 22:28:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.957 22:28:47 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:29:52.957 22:28:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.957 22:28:47 -- common/autotest_common.sh@10 -- # set +x 00:29:53.892 [2024-07-24 22:28:48.924389] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:53.892 [2024-07-24 22:28:48.924413] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:53.892 [2024-07-24 22:28:48.924426] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:53.892 [2024-07-24 22:28:49.012675] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:29:54.151 [2024-07-24 22:28:49.200261] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:54.151 [2024-07-24 22:28:49.200296] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:54.151 [2024-07-24 22:28:49.200318] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:54.151 [2024-07-24 22:28:49.200330] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:29:54.151 [2024-07-24 22:28:49.200347] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:54.151 22:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:54.151 22:28:49 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:29:54.151 22:28:49 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:29:54.151 [2024-07-24 22:28:49.204594] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x17967c0 was disconnected and freed. delete nvme_qpair. 00:29:54.151 22:28:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.151 22:28:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:54.151 22:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:54.151 22:28:49 -- common/autotest_common.sh@10 -- # set +x 00:29:54.151 22:28:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:54.151 22:28:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:54.151 22:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:54.151 22:28:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:29:54.151 22:28:49 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:29:54.151 22:28:49 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:29:54.409 22:28:49 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:29:54.409 22:28:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:54.409 22:28:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:54.409 22:28:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:54.410 22:28:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:54.410 22:28:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:54.410 22:28:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:54.410 22:28:49 -- common/autotest_common.sh@10 -- # set +x 00:29:54.410 22:28:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:54.410 22:28:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:54.410 22:28:49 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:55.345 22:28:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:55.345 22:28:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:55.345 22:28:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:55.345 22:28:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:55.345 22:28:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.345 22:28:50 -- common/autotest_common.sh@10 -- # set +x 00:29:55.345 22:28:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:55.345 22:28:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.346 22:28:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:55.346 22:28:50 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:56.722 22:28:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:56.722 22:28:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:56.722 22:28:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:56.722 22:28:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.722 22:28:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:56.722 22:28:51 -- common/autotest_common.sh@10 -- # set +x 00:29:56.722 22:28:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:56.722 22:28:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.722 22:28:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:56.722 22:28:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:57.656 22:28:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:57.656 22:28:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:29:57.656 22:28:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:57.656 22:28:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:57.657 22:28:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:57.657 22:28:52 -- common/autotest_common.sh@10 -- # set +x 00:29:57.657 22:28:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:57.657 22:28:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:57.657 22:28:52 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:57.657 22:28:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:58.591 22:28:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:58.591 22:28:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:58.591 22:28:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:58.591 22:28:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:58.591 22:28:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:58.591 22:28:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:58.591 22:28:53 -- common/autotest_common.sh@10 -- # set +x 00:29:58.591 22:28:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:58.591 22:28:53 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:58.591 22:28:53 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:59.527 22:28:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:59.527 22:28:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:59.527 22:28:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:59.527 22:28:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:59.527 22:28:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:59.527 22:28:54 -- common/autotest_common.sh@10 -- # set +x 00:29:59.527 22:28:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:59.527 22:28:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:59.527 [2024-07-24 22:28:54.641551] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:29:59.527 [2024-07-24 22:28:54.641590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.527 [2024-07-24 22:28:54.641606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.527 [2024-07-24 22:28:54.641615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.527 [2024-07-24 22:28:54.641622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.527 [2024-07-24 22:28:54.641629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.527 [2024-07-24 22:28:54.641636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.527 [2024-07-24 22:28:54.641643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.527 [2024-07-24 22:28:54.641650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.527 [2024-07-24 22:28:54.641657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:59.527 [2024-07-24 22:28:54.641663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:59.527 [2024-07-24 22:28:54.641670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175da90 is same with the state(5) to be set 00:29:59.527 [2024-07-24 22:28:54.651572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175da90 (9): Bad file descriptor 00:29:59.527 22:28:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:59.527 22:28:54 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:59.785 [2024-07-24 22:28:54.661612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:00.719 22:28:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:00.719 22:28:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:00.719 22:28:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:00.719 22:28:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:00.719 22:28:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:00.719 22:28:55 -- common/autotest_common.sh@10 -- # set +x 00:30:00.719 22:28:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:00.719 [2024-07-24 22:28:55.682061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:01.653 [2024-07-24 22:28:56.706069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:30:01.653 [2024-07-24 22:28:56.706114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x175da90 with addr=10.0.0.2, port=4420 00:30:01.653 [2024-07-24 22:28:56.706131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175da90 is same with the state(5) to be set 00:30:01.653 [2024-07-24 22:28:56.706157] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:01.654 [2024-07-24 22:28:56.706167] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:01.654 [2024-07-24 22:28:56.706175] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:01.654 [2024-07-24 22:28:56.706186] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:30:01.654 [2024-07-24 22:28:56.706568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175da90 (9): Bad file descriptor 00:30:01.654 [2024-07-24 22:28:56.706594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.654 [2024-07-24 22:28:56.706618] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:30:01.654 [2024-07-24 22:28:56.706643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.654 [2024-07-24 22:28:56.706661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.654 [2024-07-24 22:28:56.706674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.654 [2024-07-24 22:28:56.706683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.654 [2024-07-24 22:28:56.706693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.654 [2024-07-24 22:28:56.706703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.654 [2024-07-24 22:28:56.706713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.654 [2024-07-24 22:28:56.706722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.654 [2024-07-24 22:28:56.706732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:30:01.654 [2024-07-24 22:28:56.706741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:01.654 [2024-07-24 22:28:56.706750] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
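With the admin queue torn down, the outstanding ASYNC EVENT REQUEST and KEEP ALIVE commands are completed as ABORTED - SQ DELETION and the discovery controller is failed as well. The recovery half of the test follows in the trace: the address is put back on the target-side port inside its network namespace and the host is polled until discovery re-creates a bdev. Condensed from the xtrace below (the cvl_* namespace and interface names are specific to this CI host):

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    # the re-discovered subsystem is attached as a fresh controller, so the
    # bdev comes back under a new name (nvme1n1 rather than nvme0n1)
    wait_for_bdev nvme1n1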
00:30:01.654 [2024-07-24 22:28:56.707157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x175cf80 (9): Bad file descriptor 00:30:01.654 [2024-07-24 22:28:56.708172] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:30:01.654 [2024-07-24 22:28:56.708186] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:30:01.654 22:28:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:01.654 22:28:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:30:01.654 22:28:56 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:03.030 22:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:03.030 22:28:57 -- common/autotest_common.sh@10 -- # set +x 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:03.030 22:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:03.030 22:28:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:03.030 22:28:57 -- common/autotest_common.sh@10 -- # set +x 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:03.030 22:28:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:03.030 22:28:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:03.596 [2024-07-24 22:28:58.719864] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:03.596 [2024-07-24 22:28:58.719882] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:03.596 [2024-07-24 22:28:58.719896] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:03.854 [2024-07-24 22:28:58.849285] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:30:03.854 22:28:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:03.854 22:28:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:03.854 22:28:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:03.854 22:28:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:03.854 22:28:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:03.854 22:28:58 -- common/autotest_common.sh@10 -- # set +x 
00:30:03.854 22:28:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:03.854 22:28:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:03.854 22:28:58 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:30:03.854 22:28:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:30:04.113 [2024-07-24 22:28:59.073483] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:30:04.113 [2024-07-24 22:28:59.073515] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:30:04.113 [2024-07-24 22:28:59.073531] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:30:04.113 [2024-07-24 22:28:59.073543] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:30:04.113 [2024-07-24 22:28:59.073550] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:04.113 [2024-07-24 22:28:59.079740] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x176ac90 was disconnected and freed. delete nvme_qpair. 00:30:05.052 22:28:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:30:05.052 22:28:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:05.052 22:28:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:30:05.052 22:28:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.052 22:28:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:30:05.052 22:28:59 -- common/autotest_common.sh@10 -- # set +x 00:30:05.052 22:28:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:30:05.052 22:28:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.052 22:29:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:30:05.052 22:29:00 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:30:05.052 22:29:00 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3724057 00:30:05.052 22:29:00 -- common/autotest_common.sh@926 -- # '[' -z 3724057 ']' 00:30:05.052 22:29:00 -- common/autotest_common.sh@930 -- # kill -0 3724057 00:30:05.052 22:29:00 -- common/autotest_common.sh@931 -- # uname 00:30:05.052 22:29:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:05.052 22:29:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3724057 00:30:05.052 22:29:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:05.052 22:29:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:05.052 22:29:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3724057' 00:30:05.052 killing process with pid 3724057 00:30:05.052 22:29:00 -- common/autotest_common.sh@945 -- # kill 3724057 00:30:05.052 22:29:00 -- common/autotest_common.sh@950 -- # wait 3724057 00:30:05.311 22:29:00 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:30:05.311 22:29:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:05.311 22:29:00 -- nvmf/common.sh@116 -- # sync 00:30:05.311 22:29:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:05.311 22:29:00 -- nvmf/common.sh@119 -- # set +e 00:30:05.311 22:29:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:05.311 22:29:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:05.311 rmmod nvme_tcp 00:30:05.311 rmmod nvme_fabrics 00:30:05.311 rmmod nvme_keyring 00:30:05.311 22:29:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:05.311 22:29:00 -- nvmf/common.sh@123 -- # set -e 00:30:05.311 22:29:00 
-- nvmf/common.sh@124 -- # return 0 00:30:05.311 22:29:00 -- nvmf/common.sh@477 -- # '[' -n 3724023 ']' 00:30:05.311 22:29:00 -- nvmf/common.sh@478 -- # killprocess 3724023 00:30:05.311 22:29:00 -- common/autotest_common.sh@926 -- # '[' -z 3724023 ']' 00:30:05.311 22:29:00 -- common/autotest_common.sh@930 -- # kill -0 3724023 00:30:05.311 22:29:00 -- common/autotest_common.sh@931 -- # uname 00:30:05.311 22:29:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:05.311 22:29:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3724023 00:30:05.311 22:29:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:05.311 22:29:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:05.311 22:29:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3724023' 00:30:05.311 killing process with pid 3724023 00:30:05.311 22:29:00 -- common/autotest_common.sh@945 -- # kill 3724023 00:30:05.311 22:29:00 -- common/autotest_common.sh@950 -- # wait 3724023 00:30:05.569 22:29:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:05.569 22:29:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:05.569 22:29:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:05.569 22:29:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:05.569 22:29:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:05.569 22:29:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.569 22:29:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:05.569 22:29:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.511 22:29:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:07.511 00:30:07.511 real 0m21.303s 00:30:07.511 user 0m26.127s 00:30:07.511 sys 0m5.225s 00:30:07.511 22:29:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.511 22:29:02 -- common/autotest_common.sh@10 -- # set +x 00:30:07.511 ************************************ 00:30:07.511 END TEST nvmf_discovery_remove_ifc 00:30:07.511 ************************************ 00:30:07.511 22:29:02 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:30:07.511 22:29:02 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:07.511 22:29:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:07.511 22:29:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:07.511 22:29:02 -- common/autotest_common.sh@10 -- # set +x 00:30:07.511 ************************************ 00:30:07.511 START TEST nvmf_digest 00:30:07.511 ************************************ 00:30:07.511 22:29:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:30:07.770 * Looking for test storage... 
00:30:07.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:07.771 22:29:02 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.771 22:29:02 -- nvmf/common.sh@7 -- # uname -s 00:30:07.771 22:29:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.771 22:29:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.771 22:29:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.771 22:29:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.771 22:29:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.771 22:29:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.771 22:29:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.771 22:29:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.771 22:29:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.771 22:29:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.771 22:29:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:07.771 22:29:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:07.771 22:29:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.771 22:29:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.771 22:29:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.771 22:29:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.771 22:29:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.771 22:29:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.771 22:29:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.771 22:29:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.771 22:29:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.771 22:29:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.771 22:29:02 -- paths/export.sh@5 -- # export PATH 00:30:07.771 22:29:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.771 22:29:02 -- nvmf/common.sh@46 -- # : 0 00:30:07.771 22:29:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:07.771 22:29:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:07.771 22:29:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:07.771 22:29:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.771 22:29:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.771 22:29:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:07.771 22:29:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:07.771 22:29:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:07.771 22:29:02 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:07.771 22:29:02 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:30:07.771 22:29:02 -- host/digest.sh@16 -- # runtime=2 00:30:07.771 22:29:02 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:30:07.771 22:29:02 -- host/digest.sh@132 -- # nvmftestinit 00:30:07.771 22:29:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:07.771 22:29:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.771 22:29:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:07.771 22:29:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:07.771 22:29:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:07.771 22:29:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.771 22:29:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:07.771 22:29:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.771 22:29:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:07.771 22:29:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:07.771 22:29:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:07.771 22:29:02 -- common/autotest_common.sh@10 -- # set +x 00:30:13.044 22:29:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:13.044 22:29:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:13.044 22:29:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:13.044 22:29:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:13.044 22:29:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:13.044 22:29:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:13.044 22:29:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:13.044 22:29:07 -- 
nvmf/common.sh@294 -- # net_devs=() 00:30:13.044 22:29:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:13.044 22:29:07 -- nvmf/common.sh@295 -- # e810=() 00:30:13.045 22:29:07 -- nvmf/common.sh@295 -- # local -ga e810 00:30:13.045 22:29:07 -- nvmf/common.sh@296 -- # x722=() 00:30:13.045 22:29:07 -- nvmf/common.sh@296 -- # local -ga x722 00:30:13.045 22:29:07 -- nvmf/common.sh@297 -- # mlx=() 00:30:13.045 22:29:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:13.045 22:29:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.045 22:29:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.045 22:29:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.045 22:29:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.045 22:29:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.045 22:29:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.045 22:29:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.045 22:29:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:13.045 22:29:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:13.045 22:29:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:13.045 22:29:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:13.045 22:29:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:13.045 22:29:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:13.045 22:29:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:13.045 22:29:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:13.045 22:29:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:13.045 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:13.045 22:29:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:13.045 22:29:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:13.045 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:13.045 22:29:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:13.045 22:29:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:13.045 22:29:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.045 22:29:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:13.045 22:29:07 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.045 22:29:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:13.045 Found net devices under 0000:86:00.0: cvl_0_0 00:30:13.045 22:29:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.045 22:29:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:13.045 22:29:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:13.045 22:29:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:13.045 22:29:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:13.045 22:29:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:13.045 Found net devices under 0000:86:00.1: cvl_0_1 00:30:13.045 22:29:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:13.045 22:29:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:13.045 22:29:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:13.045 22:29:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:13.045 22:29:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:13.045 22:29:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:13.045 22:29:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:13.045 22:29:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:13.045 22:29:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:13.045 22:29:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:13.045 22:29:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:13.045 22:29:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:13.045 22:29:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:13.045 22:29:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:13.045 22:29:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:13.045 22:29:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:13.045 22:29:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:13.045 22:29:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:13.045 22:29:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:13.045 22:29:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:13.045 22:29:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:13.045 22:29:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:13.045 22:29:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:13.045 22:29:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:13.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:13.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:30:13.045 00:30:13.045 --- 10.0.0.2 ping statistics --- 00:30:13.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.045 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:30:13.045 22:29:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:13.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:13.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:30:13.045 00:30:13.045 --- 10.0.0.1 ping statistics --- 00:30:13.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:13.045 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:30:13.045 22:29:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:13.045 22:29:07 -- nvmf/common.sh@410 -- # return 0 00:30:13.045 22:29:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:13.045 22:29:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:13.045 22:29:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:13.045 22:29:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:13.045 22:29:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:13.045 22:29:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:13.045 22:29:07 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:13.045 22:29:07 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:30:13.045 22:29:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:13.045 22:29:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:13.045 22:29:07 -- common/autotest_common.sh@10 -- # set +x 00:30:13.045 ************************************ 00:30:13.045 START TEST nvmf_digest_clean 00:30:13.045 ************************************ 00:30:13.045 22:29:07 -- common/autotest_common.sh@1104 -- # run_digest 00:30:13.045 22:29:07 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:30:13.045 22:29:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:13.045 22:29:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:13.045 22:29:07 -- common/autotest_common.sh@10 -- # set +x 00:30:13.045 22:29:07 -- nvmf/common.sh@469 -- # nvmfpid=3729640 00:30:13.045 22:29:07 -- nvmf/common.sh@470 -- # waitforlisten 3729640 00:30:13.045 22:29:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:13.045 22:29:07 -- common/autotest_common.sh@819 -- # '[' -z 3729640 ']' 00:30:13.045 22:29:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.045 22:29:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:13.045 22:29:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.045 22:29:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:13.045 22:29:07 -- common/autotest_common.sh@10 -- # set +x 00:30:13.045 [2024-07-24 22:29:07.988038] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:30:13.045 [2024-07-24 22:29:07.988089] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:13.045 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.045 [2024-07-24 22:29:08.041783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.045 [2024-07-24 22:29:08.081453] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:13.045 [2024-07-24 22:29:08.081563] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:13.045 [2024-07-24 22:29:08.081571] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:13.045 [2024-07-24 22:29:08.081577] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:13.045 [2024-07-24 22:29:08.081594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.045 22:29:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:13.045 22:29:08 -- common/autotest_common.sh@852 -- # return 0 00:30:13.045 22:29:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:13.045 22:29:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:13.045 22:29:08 -- common/autotest_common.sh@10 -- # set +x 00:30:13.045 22:29:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:13.045 22:29:08 -- host/digest.sh@120 -- # common_target_config 00:30:13.045 22:29:08 -- host/digest.sh@43 -- # rpc_cmd 00:30:13.045 22:29:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:13.045 22:29:08 -- common/autotest_common.sh@10 -- # set +x 00:30:13.305 null0 00:30:13.305 [2024-07-24 22:29:08.243148] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.305 [2024-07-24 22:29:08.267312] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:13.305 22:29:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:13.305 22:29:08 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:30:13.305 22:29:08 -- host/digest.sh@77 -- # local rw bs qd 00:30:13.305 22:29:08 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:13.305 22:29:08 -- host/digest.sh@80 -- # rw=randread 00:30:13.305 22:29:08 -- host/digest.sh@80 -- # bs=4096 00:30:13.305 22:29:08 -- host/digest.sh@80 -- # qd=128 00:30:13.305 22:29:08 -- host/digest.sh@82 -- # bperfpid=3729782 00:30:13.305 22:29:08 -- host/digest.sh@83 -- # waitforlisten 3729782 /var/tmp/bperf.sock 00:30:13.305 22:29:08 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:13.305 22:29:08 -- common/autotest_common.sh@819 -- # '[' -z 3729782 ']' 00:30:13.305 22:29:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:13.305 22:29:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:13.305 22:29:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:13.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
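Every run_bperf invocation in this test follows the pattern the next trace lines spell out: launch bdevperf against its own RPC socket with --wait-for-rpc, finish framework init over that socket, attach the target with TCP data digest enabled, then drive a timed run through bdevperf.py. Condensed from the trace, with the workspace prefix shortened to $SPDK for readability:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # queue depth, block size and workload vary per run (128 / 4096 / randread here)
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # configuration happens over the bperf.sock RPC socket before I/O starts
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests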
00:30:13.305 22:29:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:13.305 22:29:08 -- common/autotest_common.sh@10 -- # set +x 00:30:13.305 [2024-07-24 22:29:08.315307] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:13.305 [2024-07-24 22:29:08.315348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3729782 ] 00:30:13.305 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.305 [2024-07-24 22:29:08.367897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.305 [2024-07-24 22:29:08.405459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.564 22:29:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:13.564 22:29:08 -- common/autotest_common.sh@852 -- # return 0 00:30:13.564 22:29:08 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:13.564 22:29:08 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:13.564 22:29:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:13.564 22:29:08 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:13.564 22:29:08 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:14.132 nvme0n1 00:30:14.132 22:29:09 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:14.132 22:29:09 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:14.132 Running I/O for 2 seconds... 
00:30:16.667 00:30:16.667 Latency(us) 00:30:16.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.667 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:16.667 nvme0n1 : 2.00 27905.34 109.01 0.00 0.00 4582.07 2208.28 26898.25 00:30:16.667 =================================================================================================================== 00:30:16.667 Total : 27905.34 109.01 0.00 0.00 4582.07 2208.28 26898.25 00:30:16.667 0 00:30:16.667 22:29:11 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:16.667 22:29:11 -- host/digest.sh@92 -- # get_accel_stats 00:30:16.667 22:29:11 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:16.667 22:29:11 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:16.667 | select(.opcode=="crc32c") 00:30:16.667 | "\(.module_name) \(.executed)"' 00:30:16.667 22:29:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:16.667 22:29:11 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:16.667 22:29:11 -- host/digest.sh@93 -- # exp_module=software 00:30:16.667 22:29:11 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:16.667 22:29:11 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:16.667 22:29:11 -- host/digest.sh@97 -- # killprocess 3729782 00:30:16.667 22:29:11 -- common/autotest_common.sh@926 -- # '[' -z 3729782 ']' 00:30:16.667 22:29:11 -- common/autotest_common.sh@930 -- # kill -0 3729782 00:30:16.667 22:29:11 -- common/autotest_common.sh@931 -- # uname 00:30:16.667 22:29:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:16.667 22:29:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3729782 00:30:16.667 22:29:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:16.667 22:29:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:16.667 22:29:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3729782' 00:30:16.668 killing process with pid 3729782 00:30:16.668 22:29:11 -- common/autotest_common.sh@945 -- # kill 3729782 00:30:16.668 Received shutdown signal, test time was about 2.000000 seconds 00:30:16.668 00:30:16.668 Latency(us) 00:30:16.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.668 =================================================================================================================== 00:30:16.668 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:16.668 22:29:11 -- common/autotest_common.sh@950 -- # wait 3729782 00:30:16.668 22:29:11 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:30:16.668 22:29:11 -- host/digest.sh@77 -- # local rw bs qd 00:30:16.668 22:29:11 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:16.668 22:29:11 -- host/digest.sh@80 -- # rw=randread 00:30:16.668 22:29:11 -- host/digest.sh@80 -- # bs=131072 00:30:16.668 22:29:11 -- host/digest.sh@80 -- # qd=16 00:30:16.668 22:29:11 -- host/digest.sh@82 -- # bperfpid=3730264 00:30:16.668 22:29:11 -- host/digest.sh@83 -- # waitforlisten 3730264 /var/tmp/bperf.sock 00:30:16.668 22:29:11 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:16.668 22:29:11 -- common/autotest_common.sh@819 -- # '[' -z 3730264 ']' 00:30:16.668 22:29:11 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:30:16.668 22:29:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:16.668 22:29:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:16.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:16.668 22:29:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:16.668 22:29:11 -- common/autotest_common.sh@10 -- # set +x 00:30:16.668 [2024-07-24 22:29:11.639618] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:16.668 [2024-07-24 22:29:11.639665] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730264 ] 00:30:16.668 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:16.668 Zero copy mechanism will not be used. 00:30:16.668 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.668 [2024-07-24 22:29:11.692180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.668 [2024-07-24 22:29:11.730937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.668 22:29:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:16.668 22:29:11 -- common/autotest_common.sh@852 -- # return 0 00:30:16.668 22:29:11 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:16.668 22:29:11 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:16.668 22:29:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:16.927 22:29:11 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:16.927 22:29:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:17.495 nvme0n1 00:30:17.495 22:29:12 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:17.495 22:29:12 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:17.495 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:17.495 Zero copy mechanism will not be used. 00:30:17.495 Running I/O for 2 seconds... 
00:30:19.403 00:30:19.403 Latency(us) 00:30:19.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.403 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:19.403 nvme0n1 : 2.01 2144.98 268.12 0.00 0.00 7456.91 6496.61 25530.55 00:30:19.403 =================================================================================================================== 00:30:19.403 Total : 2144.98 268.12 0.00 0.00 7456.91 6496.61 25530.55 00:30:19.403 0 00:30:19.403 22:29:14 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:19.403 22:29:14 -- host/digest.sh@92 -- # get_accel_stats 00:30:19.403 22:29:14 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:19.403 22:29:14 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:19.403 | select(.opcode=="crc32c") 00:30:19.403 | "\(.module_name) \(.executed)"' 00:30:19.403 22:29:14 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:19.662 22:29:14 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:19.662 22:29:14 -- host/digest.sh@93 -- # exp_module=software 00:30:19.662 22:29:14 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:19.662 22:29:14 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:19.663 22:29:14 -- host/digest.sh@97 -- # killprocess 3730264 00:30:19.663 22:29:14 -- common/autotest_common.sh@926 -- # '[' -z 3730264 ']' 00:30:19.663 22:29:14 -- common/autotest_common.sh@930 -- # kill -0 3730264 00:30:19.663 22:29:14 -- common/autotest_common.sh@931 -- # uname 00:30:19.663 22:29:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:19.663 22:29:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3730264 00:30:19.663 22:29:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:19.663 22:29:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:19.663 22:29:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3730264' 00:30:19.663 killing process with pid 3730264 00:30:19.663 22:29:14 -- common/autotest_common.sh@945 -- # kill 3730264 00:30:19.663 Received shutdown signal, test time was about 2.000000 seconds 00:30:19.663 00:30:19.663 Latency(us) 00:30:19.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:19.663 =================================================================================================================== 00:30:19.663 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:19.663 22:29:14 -- common/autotest_common.sh@950 -- # wait 3730264 00:30:19.922 22:29:14 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:30:19.922 22:29:14 -- host/digest.sh@77 -- # local rw bs qd 00:30:19.922 22:29:14 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:19.922 22:29:14 -- host/digest.sh@80 -- # rw=randwrite 00:30:19.922 22:29:14 -- host/digest.sh@80 -- # bs=4096 00:30:19.922 22:29:14 -- host/digest.sh@80 -- # qd=128 00:30:19.922 22:29:14 -- host/digest.sh@82 -- # bperfpid=3730749 00:30:19.922 22:29:14 -- host/digest.sh@83 -- # waitforlisten 3730749 /var/tmp/bperf.sock 00:30:19.922 22:29:14 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:30:19.922 22:29:14 -- common/autotest_common.sh@819 -- # '[' -z 3730749 ']' 00:30:19.922 22:29:14 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:30:19.922 22:29:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:19.922 22:29:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:19.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:19.922 22:29:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:19.922 22:29:14 -- common/autotest_common.sh@10 -- # set +x 00:30:19.922 [2024-07-24 22:29:14.895126] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:19.922 [2024-07-24 22:29:14.895173] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730749 ] 00:30:19.922 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.922 [2024-07-24 22:29:14.948757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.922 [2024-07-24 22:29:14.983184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.922 22:29:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:19.922 22:29:15 -- common/autotest_common.sh@852 -- # return 0 00:30:19.922 22:29:15 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:19.922 22:29:15 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:19.922 22:29:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:20.182 22:29:15 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:20.182 22:29:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:20.441 nvme0n1 00:30:20.441 22:29:15 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:20.441 22:29:15 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:20.700 Running I/O for 2 seconds... 
00:30:22.604 00:30:22.604 Latency(us) 00:30:22.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.604 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.604 nvme0n1 : 2.00 26624.54 104.00 0.00 0.00 4800.00 2706.92 29861.62 00:30:22.604 =================================================================================================================== 00:30:22.604 Total : 26624.54 104.00 0.00 0.00 4800.00 2706.92 29861.62 00:30:22.604 0 00:30:22.604 22:29:17 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:22.604 22:29:17 -- host/digest.sh@92 -- # get_accel_stats 00:30:22.604 22:29:17 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:22.604 22:29:17 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:22.604 | select(.opcode=="crc32c") 00:30:22.604 | "\(.module_name) \(.executed)"' 00:30:22.604 22:29:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:22.864 22:29:17 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:22.864 22:29:17 -- host/digest.sh@93 -- # exp_module=software 00:30:22.864 22:29:17 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:22.864 22:29:17 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:22.864 22:29:17 -- host/digest.sh@97 -- # killprocess 3730749 00:30:22.864 22:29:17 -- common/autotest_common.sh@926 -- # '[' -z 3730749 ']' 00:30:22.864 22:29:17 -- common/autotest_common.sh@930 -- # kill -0 3730749 00:30:22.864 22:29:17 -- common/autotest_common.sh@931 -- # uname 00:30:22.864 22:29:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:22.864 22:29:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3730749 00:30:22.864 22:29:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:22.864 22:29:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:22.864 22:29:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3730749' 00:30:22.864 killing process with pid 3730749 00:30:22.864 22:29:17 -- common/autotest_common.sh@945 -- # kill 3730749 00:30:22.864 Received shutdown signal, test time was about 2.000000 seconds 00:30:22.864 00:30:22.864 Latency(us) 00:30:22.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.864 =================================================================================================================== 00:30:22.864 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:22.864 22:29:17 -- common/autotest_common.sh@950 -- # wait 3730749 00:30:22.864 22:29:17 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:30:22.864 22:29:17 -- host/digest.sh@77 -- # local rw bs qd 00:30:22.864 22:29:17 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:30:22.864 22:29:17 -- host/digest.sh@80 -- # rw=randwrite 00:30:22.864 22:29:17 -- host/digest.sh@80 -- # bs=131072 00:30:22.864 22:29:17 -- host/digest.sh@80 -- # qd=16 00:30:22.864 22:29:17 -- host/digest.sh@82 -- # bperfpid=3731319 00:30:22.864 22:29:17 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:30:23.124 22:29:17 -- host/digest.sh@83 -- # waitforlisten 3731319 /var/tmp/bperf.sock 00:30:23.124 22:29:17 -- common/autotest_common.sh@819 -- # '[' -z 3731319 ']' 00:30:23.124 22:29:17 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:30:23.124 22:29:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:23.124 22:29:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:23.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:23.124 22:29:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:23.124 22:29:17 -- common/autotest_common.sh@10 -- # set +x 00:30:23.124 [2024-07-24 22:29:18.021268] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:23.124 [2024-07-24 22:29:18.021314] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731319 ] 00:30:23.124 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:23.124 Zero copy mechanism will not be used. 00:30:23.124 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.124 [2024-07-24 22:29:18.071076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.124 [2024-07-24 22:29:18.109796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.124 22:29:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:23.124 22:29:18 -- common/autotest_common.sh@852 -- # return 0 00:30:23.124 22:29:18 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:30:23.124 22:29:18 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:30:23.124 22:29:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:23.383 22:29:18 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:23.383 22:29:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:23.642 nvme0n1 00:30:23.901 22:29:18 -- host/digest.sh@91 -- # bperf_py perform_tests 00:30:23.901 22:29:18 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:23.901 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:23.901 Zero copy mechanism will not be used. 00:30:23.901 Running I/O for 2 seconds... 
00:30:25.806 00:30:25.806 Latency(us) 00:30:25.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.806 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:25.806 nvme0n1 : 2.01 1457.87 182.23 0.00 0.00 10943.75 8149.26 40119.43 00:30:25.806 =================================================================================================================== 00:30:25.806 Total : 1457.87 182.23 0.00 0.00 10943.75 8149.26 40119.43 00:30:25.806 0 00:30:25.806 22:29:20 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:30:25.806 22:29:20 -- host/digest.sh@92 -- # get_accel_stats 00:30:25.806 22:29:20 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:30:25.806 22:29:20 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:30:25.806 | select(.opcode=="crc32c") 00:30:25.806 | "\(.module_name) \(.executed)"' 00:30:25.806 22:29:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:30:26.065 22:29:21 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:30:26.066 22:29:21 -- host/digest.sh@93 -- # exp_module=software 00:30:26.066 22:29:21 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:30:26.066 22:29:21 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:30:26.066 22:29:21 -- host/digest.sh@97 -- # killprocess 3731319 00:30:26.066 22:29:21 -- common/autotest_common.sh@926 -- # '[' -z 3731319 ']' 00:30:26.066 22:29:21 -- common/autotest_common.sh@930 -- # kill -0 3731319 00:30:26.066 22:29:21 -- common/autotest_common.sh@931 -- # uname 00:30:26.066 22:29:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:26.066 22:29:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3731319 00:30:26.066 22:29:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:26.066 22:29:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:26.066 22:29:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3731319' 00:30:26.066 killing process with pid 3731319 00:30:26.066 22:29:21 -- common/autotest_common.sh@945 -- # kill 3731319 00:30:26.066 Received shutdown signal, test time was about 2.000000 seconds 00:30:26.066 00:30:26.066 Latency(us) 00:30:26.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:26.066 =================================================================================================================== 00:30:26.066 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:26.066 22:29:21 -- common/autotest_common.sh@950 -- # wait 3731319 00:30:26.325 22:29:21 -- host/digest.sh@126 -- # killprocess 3729640 00:30:26.325 22:29:21 -- common/autotest_common.sh@926 -- # '[' -z 3729640 ']' 00:30:26.325 22:29:21 -- common/autotest_common.sh@930 -- # kill -0 3729640 00:30:26.325 22:29:21 -- common/autotest_common.sh@931 -- # uname 00:30:26.325 22:29:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:26.325 22:29:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3729640 00:30:26.325 22:29:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:26.325 22:29:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:26.325 22:29:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3729640' 00:30:26.325 killing process with pid 3729640 00:30:26.325 22:29:21 -- common/autotest_common.sh@945 -- # kill 3729640 00:30:26.325 22:29:21 -- common/autotest_common.sh@950 -- # wait 3729640 
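After each of the four runs above, the script verifies which accel module actually executed the crc32c operations behind the digest calculation; with no offload engine configured it expects the software module with a non-zero execution count. Condensed from the repeated accel_get_stats checks (same rpc.py and bperf.sock as in the trace):

    read -r acc_module acc_executed < <(
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    if (( acc_executed > 0 )) && [[ "$acc_module" == software ]]; then
        echo "crc32c digests were computed by the software accel module"
    fi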
00:30:26.584 00:30:26.584 real 0m13.547s 00:30:26.584 user 0m26.665s 00:30:26.584 sys 0m3.287s 00:30:26.584 22:29:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:26.584 22:29:21 -- common/autotest_common.sh@10 -- # set +x 00:30:26.584 ************************************ 00:30:26.584 END TEST nvmf_digest_clean 00:30:26.584 ************************************ 00:30:26.584 22:29:21 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:30:26.584 22:29:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:26.584 22:29:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:26.584 22:29:21 -- common/autotest_common.sh@10 -- # set +x 00:30:26.584 ************************************ 00:30:26.584 START TEST nvmf_digest_error 00:30:26.584 ************************************ 00:30:26.584 22:29:21 -- common/autotest_common.sh@1104 -- # run_digest_error 00:30:26.584 22:29:21 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:30:26.584 22:29:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:26.584 22:29:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:26.584 22:29:21 -- common/autotest_common.sh@10 -- # set +x 00:30:26.584 22:29:21 -- nvmf/common.sh@469 -- # nvmfpid=3731948 00:30:26.584 22:29:21 -- nvmf/common.sh@470 -- # waitforlisten 3731948 00:30:26.584 22:29:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:30:26.584 22:29:21 -- common/autotest_common.sh@819 -- # '[' -z 3731948 ']' 00:30:26.584 22:29:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.584 22:29:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:26.584 22:29:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.584 22:29:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:26.584 22:29:21 -- common/autotest_common.sh@10 -- # set +x 00:30:26.584 [2024-07-24 22:29:21.587838] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:26.584 [2024-07-24 22:29:21.587885] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.584 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.584 [2024-07-24 22:29:21.645241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.584 [2024-07-24 22:29:21.683751] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:26.584 [2024-07-24 22:29:21.683876] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.584 [2024-07-24 22:29:21.683884] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.584 [2024-07-24 22:29:21.683890] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:26.584 [2024-07-24 22:29:21.683906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.584 22:29:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:26.585 22:29:21 -- common/autotest_common.sh@852 -- # return 0 00:30:26.585 22:29:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:26.585 22:29:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:26.585 22:29:21 -- common/autotest_common.sh@10 -- # set +x 00:30:26.843 22:29:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.843 22:29:21 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:30:26.843 22:29:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:26.843 22:29:21 -- common/autotest_common.sh@10 -- # set +x 00:30:26.843 [2024-07-24 22:29:21.752350] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:30:26.843 22:29:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:26.843 22:29:21 -- host/digest.sh@104 -- # common_target_config 00:30:26.843 22:29:21 -- host/digest.sh@43 -- # rpc_cmd 00:30:26.843 22:29:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:26.843 22:29:21 -- common/autotest_common.sh@10 -- # set +x 00:30:26.843 null0 00:30:26.843 [2024-07-24 22:29:21.840476] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.843 [2024-07-24 22:29:21.864643] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.843 22:29:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:26.843 22:29:21 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:30:26.843 22:29:21 -- host/digest.sh@54 -- # local rw bs qd 00:30:26.843 22:29:21 -- host/digest.sh@56 -- # rw=randread 00:30:26.843 22:29:21 -- host/digest.sh@56 -- # bs=4096 00:30:26.843 22:29:21 -- host/digest.sh@56 -- # qd=128 00:30:26.843 22:29:21 -- host/digest.sh@58 -- # bperfpid=3731973 00:30:26.843 22:29:21 -- host/digest.sh@60 -- # waitforlisten 3731973 /var/tmp/bperf.sock 00:30:26.843 22:29:21 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:30:26.843 22:29:21 -- common/autotest_common.sh@819 -- # '[' -z 3731973 ']' 00:30:26.843 22:29:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:26.843 22:29:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:26.843 22:29:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:26.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:26.843 22:29:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:26.843 22:29:21 -- common/autotest_common.sh@10 -- # set +x 00:30:26.843 [2024-07-24 22:29:21.913862] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
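Two things happen in this stretch: the target assigns the crc32c opcode to the error accel module and exposes a null bdev over TCP on 10.0.0.2:4420 (common_target_config), and a separate bdevperf instance is launched with its own RPC socket so error injection can be toggled from the test while I/O runs. The two commands below are copied from the log, with only the paths shortened:

    # on the target: route crc32c through the error-injection accel module
    rpc_cmd accel_assign_opc -o crc32c -m error
    # the initiator side: bdevperf on core 1, controlled over /var/tmp/bperf.sock
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &

The -z flag keeps bdevperf idle until perform_tests arrives over the socket, which is what lets the test attach the controller and arm the corruption before any reads are submitted.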
00:30:26.843 [2024-07-24 22:29:21.913907] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731973 ] 00:30:26.843 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.843 [2024-07-24 22:29:21.967881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.131 [2024-07-24 22:29:22.007882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.700 22:29:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:27.700 22:29:22 -- common/autotest_common.sh@852 -- # return 0 00:30:27.700 22:29:22 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:27.700 22:29:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:27.959 22:29:22 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:27.959 22:29:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:27.959 22:29:22 -- common/autotest_common.sh@10 -- # set +x 00:30:27.959 22:29:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:27.959 22:29:22 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:27.959 22:29:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:28.218 nvme0n1 00:30:28.218 22:29:23 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:30:28.218 22:29:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:28.218 22:29:23 -- common/autotest_common.sh@10 -- # set +x 00:30:28.218 22:29:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:28.218 22:29:23 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:28.218 22:29:23 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:28.218 Running I/O for 2 seconds... 
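With bdevperf idle, the test configures the NVMe-oF connection with data digest enabled and then arms crc32c corruption, so the reads that follow fail the digest check and complete as transient transport errors. The sequence below replays the commands shown in the log, with all flags as logged and only the rpc.py path shortened:

    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable      # attach cleanly first
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256  # start corrupting crc32c results
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each "data digest error on tqpair" record that follows, paired with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, is one read whose payload CRC was corrupted by the injection armed above.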
00:30:28.218 [2024-07-24 22:29:23.228571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.228606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.228617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.218 [2024-07-24 22:29:23.238103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.238132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.238141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.218 [2024-07-24 22:29:23.246567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.246590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.246600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.218 [2024-07-24 22:29:23.256453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.256477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.256486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.218 [2024-07-24 22:29:23.265087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.265110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.265118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.218 [2024-07-24 22:29:23.275337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.275360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.275368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.218 [2024-07-24 22:29:23.283659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.283680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.283689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.218 [2024-07-24 22:29:23.293084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.293103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.293112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.218 [2024-07-24 22:29:23.301386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.301405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.301413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.218 [2024-07-24 22:29:23.310401] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.310420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.310434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.218 [2024-07-24 22:29:23.321135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.321155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.321162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.218 [2024-07-24 22:29:23.329165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.329184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.329192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.218 [2024-07-24 22:29:23.338122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.338141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.338150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.218 [2024-07-24 22:29:23.347557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.218 [2024-07-24 22:29:23.347576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.218 [2024-07-24 22:29:23.347585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.357522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.357542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.357550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.366427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.366446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.366454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.376032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.376057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.376065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.387390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.387408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.387416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.398330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.398354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.398362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.407473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.407492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.407500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.416640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.416660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:28.476 [2024-07-24 22:29:23.416668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.424514] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.424534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.424542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.437452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.437472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.437479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.448901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.448920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.448928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.456779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.456799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.456806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.464967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.464987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.464994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.476381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.476401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.476409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.489882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.489901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 
lba:23911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.489909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.501730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.476 [2024-07-24 22:29:23.501749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.476 [2024-07-24 22:29:23.501757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.476 [2024-07-24 22:29:23.510028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.477 [2024-07-24 22:29:23.510054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.477 [2024-07-24 22:29:23.510063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.477 [2024-07-24 22:29:23.518962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.477 [2024-07-24 22:29:23.518982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.477 [2024-07-24 22:29:23.518989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.477 [2024-07-24 22:29:23.528257] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.477 [2024-07-24 22:29:23.528276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.477 [2024-07-24 22:29:23.528284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.477 [2024-07-24 22:29:23.541265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.477 [2024-07-24 22:29:23.541284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.477 [2024-07-24 22:29:23.541292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.477 [2024-07-24 22:29:23.550487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.477 [2024-07-24 22:29:23.550508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.477 [2024-07-24 22:29:23.550516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.477 [2024-07-24 22:29:23.559893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.477 [2024-07-24 22:29:23.559913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.477 [2024-07-24 22:29:23.559921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.477 [2024-07-24 22:29:23.567712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.477 [2024-07-24 22:29:23.567732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.477 [2024-07-24 22:29:23.567743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.477 [2024-07-24 22:29:23.577165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.477 [2024-07-24 22:29:23.577185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.477 [2024-07-24 22:29:23.577192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.477 [2024-07-24 22:29:23.585979] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.477 [2024-07-24 22:29:23.585999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.477 [2024-07-24 22:29:23.586007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.477 [2024-07-24 22:29:23.594223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.477 [2024-07-24 22:29:23.594244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.477 [2024-07-24 22:29:23.594253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.477 [2024-07-24 22:29:23.603288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.477 [2024-07-24 22:29:23.603310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.477 [2024-07-24 22:29:23.603318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.611935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.611956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.611964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.620539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 
00:30:28.735 [2024-07-24 22:29:23.620559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.620567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.629443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.629462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.629470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.638815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.638835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.638843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.647396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.647419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.647427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.655693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.655712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.655720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.664232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.664251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.664259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.673092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.673111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.673118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.681649] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.681668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.681676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.690723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.690742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.690750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.698955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.698975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.698983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.708194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.708213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.708221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.716341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.716360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.716371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.725097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.725116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.725124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.733674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.733694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.733702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.742378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.742397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.742405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.751384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.751403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.751411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.759919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.759939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.759947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.768896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.768916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.768924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.777488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.777513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.777521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.786247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.786267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.786275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.794577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.794600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.794608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.802995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.803015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.803023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.812270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.812289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.812297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.820464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.820484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.820491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.829455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.829475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.735 [2024-07-24 22:29:23.829483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.735 [2024-07-24 22:29:23.837851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.735 [2024-07-24 22:29:23.837871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.736 [2024-07-24 22:29:23.837879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.736 [2024-07-24 22:29:23.847008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.736 [2024-07-24 22:29:23.847028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.736 [2024-07-24 22:29:23.847036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.736 [2024-07-24 22:29:23.855253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.736 [2024-07-24 22:29:23.855274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.736 [2024-07-24 22:29:23.855282] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.736 [2024-07-24 22:29:23.864305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.736 [2024-07-24 22:29:23.864325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.736 [2024-07-24 22:29:23.864332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.873362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.873382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.873391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.881638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.881658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.881666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.890623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.890643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.890651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.899056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.899076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.899084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.908035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.908060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.908068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.916484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.916504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:28.995 [2024-07-24 22:29:23.916512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.925347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.925367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.925374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.933947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.933966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.933973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.942896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.942914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.942925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.951099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.951117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.951125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.959659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.959678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.959686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.968665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.968685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.968692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.977168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.977187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:9772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.977195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.986602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.986621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:18423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.986629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:23.996731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:23.996750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:23.996758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:24.004670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:24.004689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:24.004697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:24.014781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:24.014800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:24.014807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:24.023035] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:24.023063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:24.023071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:24.033664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:24.033684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.995 [2024-07-24 22:29:24.033693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:28.995 [2024-07-24 22:29:24.045114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:28.995 [2024-07-24 22:29:24.045133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.995 [2024-07-24 22:29:24.045141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record sequence (nvme_tcp.c:1391 data digest error on tqpair=(0x14df740), the offending READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 10 ms from 22:29:24.057 through 22:29:25.166, differing only in cid and lba; the intervening records follow this identical pattern and are not reproduced individually ...]
00:30:30.298 [2024-07-24 22:29:25.174807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740)
00:30:30.298 [2024-07-24 22:29:25.174829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:30.298 [2024-07-24 22:29:25.174837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.298 [2024-07-24 22:29:25.184994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:30.298 [2024-07-24 22:29:25.185014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.298 [2024-07-24 22:29:25.185022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.298 [2024-07-24 22:29:25.192965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:30.298 [2024-07-24 22:29:25.192985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.298 [2024-07-24 22:29:25.192993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.298 [2024-07-24 22:29:25.201471] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14df740) 00:30:30.298 [2024-07-24 22:29:25.201491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:30.298 [2024-07-24 22:29:25.201498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:30.298 00:30:30.298 Latency(us) 00:30:30.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.298 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:30.298 nvme0n1 : 2.00 25923.92 101.27 0.00 0.00 4932.83 2478.97 24276.81 00:30:30.298 =================================================================================================================== 00:30:30.298 Total : 25923.92 101.27 0.00 0.00 4932.83 2478.97 24276.81 00:30:30.298 0 00:30:30.298 22:29:25 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:30.298 22:29:25 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:30.298 22:29:25 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:30.298 | .driver_specific 00:30:30.298 | .nvme_error 00:30:30.298 | .status_code 00:30:30.298 | .command_transient_transport_error' 00:30:30.298 22:29:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:30.298 22:29:25 -- host/digest.sh@71 -- # (( 203 > 0 )) 00:30:30.298 22:29:25 -- host/digest.sh@73 -- # killprocess 3731973 00:30:30.298 22:29:25 -- common/autotest_common.sh@926 -- # '[' -z 3731973 ']' 00:30:30.298 22:29:25 -- common/autotest_common.sh@930 -- # kill -0 3731973 00:30:30.298 22:29:25 -- common/autotest_common.sh@931 -- # uname 00:30:30.298 22:29:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:30.298 22:29:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3731973 00:30:30.558 22:29:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:30.558 22:29:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:30.558 22:29:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
3731973' 00:30:30.558 killing process with pid 3731973 00:30:30.558 22:29:25 -- common/autotest_common.sh@945 -- # kill 3731973 00:30:30.558 Received shutdown signal, test time was about 2.000000 seconds 00:30:30.558 00:30:30.558 Latency(us) 00:30:30.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.558 =================================================================================================================== 00:30:30.558 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:30.558 22:29:25 -- common/autotest_common.sh@950 -- # wait 3731973 00:30:30.558 22:29:25 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:30:30.558 22:29:25 -- host/digest.sh@54 -- # local rw bs qd 00:30:30.558 22:29:25 -- host/digest.sh@56 -- # rw=randread 00:30:30.558 22:29:25 -- host/digest.sh@56 -- # bs=131072 00:30:30.558 22:29:25 -- host/digest.sh@56 -- # qd=16 00:30:30.558 22:29:25 -- host/digest.sh@58 -- # bperfpid=3732680 00:30:30.558 22:29:25 -- host/digest.sh@60 -- # waitforlisten 3732680 /var/tmp/bperf.sock 00:30:30.558 22:29:25 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:30:30.558 22:29:25 -- common/autotest_common.sh@819 -- # '[' -z 3732680 ']' 00:30:30.558 22:29:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:30.558 22:29:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:30.558 22:29:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:30.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:30.558 22:29:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:30.558 22:29:25 -- common/autotest_common.sh@10 -- # set +x 00:30:30.558 [2024-07-24 22:29:25.663638] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:30.558 [2024-07-24 22:29:25.663699] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732680 ] 00:30:30.558 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:30.558 Zero copy mechanism will not be used. 
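The (( 203 > 0 )) check traced above is the pass condition for the 4 KiB randread phase that just finished: the test reads the per-NVMe error statistics accumulated by bdevperf and requires at least one COMMAND TRANSIENT TRANSPORT ERROR, which is how the injected data digest failures surface. Below is a minimal sketch of that readback, assuming the rpc.py path, RPC socket, and bdev name used by this job; it is a reconstruction of the traced calls, not the digest.sh source.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock

  # bdev_get_iostat exposes driver_specific NVMe error counters, assuming the
  # controller options enabled --nvme-error-stat as they do elsewhere in this trace.
  errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # A zero count means the corrupted data digests were never reported back as
  # transient transport errors, and the phase fails.
  (( errcount > 0 )) || echo "no transient transport errors counted" >&2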
00:30:30.558 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.817 [2024-07-24 22:29:25.716656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.817 [2024-07-24 22:29:25.754823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.383 22:29:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:31.383 22:29:26 -- common/autotest_common.sh@852 -- # return 0 00:30:31.383 22:29:26 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:31.383 22:29:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:31.640 22:29:26 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:31.640 22:29:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:31.640 22:29:26 -- common/autotest_common.sh@10 -- # set +x 00:30:31.640 22:29:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:31.640 22:29:26 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:31.640 22:29:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:31.898 nvme0n1 00:30:31.898 22:29:27 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:31.898 22:29:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:31.898 22:29:27 -- common/autotest_common.sh@10 -- # set +x 00:30:31.898 22:29:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:31.898 22:29:27 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:31.898 22:29:27 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:32.157 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:32.157 Zero copy mechanism will not be used. 00:30:32.157 Running I/O for 2 seconds... 
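Before the "Running I/O for 2 seconds..." line above, the harness configures the freshly started bdevperf instance entirely over its RPC socket: per-status-code NVMe error counting with unlimited bdev retries, a TCP controller attached with data digest (--ddgst) enabled, CRC32C error injection armed so every 32nd accel operation is corrupted, and finally the workload itself via bdevperf.py. The sketch below restates the main traced calls as plain rpc.py invocations. It assumes the paths from this job, and for brevity it sends accel_error_inject_error to the same bperf socket, whereas the script issues that call through its own rpc_cmd helper whose target socket is not visible in this excerpt.

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

  # Count NVMe errors per status code and retry failed I/O indefinitely, so each
  # digest failure is recorded while the read is eventually completed.
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the TCP controller with data digest enabled so CRC32C data digests
  # are generated and checked on the connection.
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt every 32nd CRC32C operation in the accel layer, which makes the
  # data digest verification fail.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

  # Start the randread workload defined on the bdevperf command line (-o 131072 -q 16 -t 2).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests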
00:30:32.157 [2024-07-24 22:29:27.138752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.157 [2024-07-24 22:29:27.138785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.157 [2024-07-24 22:29:27.138796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.157 [2024-07-24 22:29:27.153796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.157 [2024-07-24 22:29:27.153819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.157 [2024-07-24 22:29:27.153828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.157 [2024-07-24 22:29:27.166886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.157 [2024-07-24 22:29:27.166906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.157 [2024-07-24 22:29:27.166914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.157 [2024-07-24 22:29:27.179991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.157 [2024-07-24 22:29:27.180012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.157 [2024-07-24 22:29:27.180020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.157 [2024-07-24 22:29:27.193146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.157 [2024-07-24 22:29:27.193166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.157 [2024-07-24 22:29:27.193175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.157 [2024-07-24 22:29:27.206241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.157 [2024-07-24 22:29:27.206261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.157 [2024-07-24 22:29:27.206269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.157 [2024-07-24 22:29:27.219325] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.157 [2024-07-24 22:29:27.219344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.157 [2024-07-24 22:29:27.219352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.157 [2024-07-24 22:29:27.232386] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.157 [2024-07-24 22:29:27.232406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.157 [2024-07-24 22:29:27.232414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.157 [2024-07-24 22:29:27.245602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.157 [2024-07-24 22:29:27.245621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.157 [2024-07-24 22:29:27.245629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.157 [2024-07-24 22:29:27.258769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.157 [2024-07-24 22:29:27.258792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.157 [2024-07-24 22:29:27.258800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.158 [2024-07-24 22:29:27.271917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.158 [2024-07-24 22:29:27.271937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.158 [2024-07-24 22:29:27.271945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.158 [2024-07-24 22:29:27.285253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.158 [2024-07-24 22:29:27.285273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.158 [2024-07-24 22:29:27.285280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.298895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.298914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.298922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.311989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.312009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.312016] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.325320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.325340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.325347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.338669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.338689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.338697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.351826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.351846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.351853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.365064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.365083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.365090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.378145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.378165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.378173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.391211] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.391231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.391239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.404374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.404394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 
22:29:27.404402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.417436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.417457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.417465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.430763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.430783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.430791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.443820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.443839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.443847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.456942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.456962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.456969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.470003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.470022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.470030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.483099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.483118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.483129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.496088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.496108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.496116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.509331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.509351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.509358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.522381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.522400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.522408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.535716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.535735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.535742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.417 [2024-07-24 22:29:27.548824] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.417 [2024-07-24 22:29:27.548844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.417 [2024-07-24 22:29:27.548852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.561932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.561951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.561959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.575074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.575093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.575100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.587914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.587933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.587940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.601141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.601164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.601171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.614284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.614304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.614311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.627534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.627553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.627561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.640522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.640541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.640548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.653573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.653592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.653600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.666572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.666591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.666599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.679778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.679798] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.679806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.692750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.692770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.692777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.705893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.705912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.705919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.718881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.718900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.718908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.732017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.732036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.732049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.745092] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.745111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.745118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.758234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.758253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.758261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.771206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 
00:30:32.677 [2024-07-24 22:29:27.771225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.677 [2024-07-24 22:29:27.771232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.677 [2024-07-24 22:29:27.784440] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.677 [2024-07-24 22:29:27.784459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.678 [2024-07-24 22:29:27.784467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.678 [2024-07-24 22:29:27.797490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.678 [2024-07-24 22:29:27.797509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.678 [2024-07-24 22:29:27.797517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.810653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.810673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.810680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.823676] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.823696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.823707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.836815] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.836834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.836842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.849783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.849802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.849809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.862916] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.862936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.862943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.876084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.876104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.876112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.889394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.889416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.889424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.902774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.902795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.902802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.916052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.916073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.916081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.929146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.929167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.929175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.942598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.942619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.942627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.955902] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.955923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.955931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.969110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.969130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.969138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.982255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.982275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.982283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:27.995398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:27.995418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:27.995425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:28.008661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:28.008681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:28.008689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:28.022066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:28.022086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:28.022094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:28.035447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:28.035467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:28.035475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:30:32.937 [2024-07-24 22:29:28.048783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:28.048803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:28.048814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.937 [2024-07-24 22:29:28.062030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:32.937 [2024-07-24 22:29:28.062056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.937 [2024-07-24 22:29:28.062064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.075138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.075158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.075166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.088285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.088305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.088313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.101433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.101453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.101461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.114633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.114653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.114660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.127774] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.127794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.127801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.140840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.140860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.140867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.154263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.154283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.154291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.167364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.167387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.167395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.180545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.180565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.180572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.193645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.193665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.193672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.206817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.206837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.206845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.220148] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.220168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.220176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.233260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.233280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.233287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.246504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.246524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.246532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.259448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.259468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.259476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.272751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.272771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.272778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.285927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.285947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.285954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.299133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.197 [2024-07-24 22:29:28.299153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.197 [2024-07-24 22:29:28.299160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.197 [2024-07-24 22:29:28.312910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.198 [2024-07-24 22:29:28.312929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:33.198 [2024-07-24 22:29:28.312937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.198 [2024-07-24 22:29:28.326519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.198 [2024-07-24 22:29:28.326539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.198 [2024-07-24 22:29:28.326546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.456 [2024-07-24 22:29:28.340097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.456 [2024-07-24 22:29:28.340117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.456 [2024-07-24 22:29:28.340124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.456 [2024-07-24 22:29:28.360213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.456 [2024-07-24 22:29:28.360232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.456 [2024-07-24 22:29:28.360239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.457 [2024-07-24 22:29:28.376630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.457 [2024-07-24 22:29:28.376649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.457 [2024-07-24 22:29:28.376657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.457 [2024-07-24 22:29:28.390233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.457 [2024-07-24 22:29:28.390253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.457 [2024-07-24 22:29:28.390261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.457 [2024-07-24 22:29:28.403802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.457 [2024-07-24 22:29:28.403824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.457 [2024-07-24 22:29:28.403834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.457 [2024-07-24 22:29:28.419014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.457 [2024-07-24 22:29:28.419034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.457 [2024-07-24 22:29:28.419046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.457 [2024-07-24 22:29:28.434448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.457 [2024-07-24 22:29:28.434469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.457 [2024-07-24 22:29:28.434476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.457 [2024-07-24 22:29:28.449500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.457 [2024-07-24 22:29:28.449519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.457 [2024-07-24 22:29:28.449528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.457 [2024-07-24 22:29:28.465004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.457 [2024-07-24 22:29:28.465024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.457 [2024-07-24 22:29:28.465032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.457 [2024-07-24 22:29:28.481336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.457 [2024-07-24 22:29:28.481356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.457 [2024-07-24 22:29:28.481364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.457 [2024-07-24 22:29:28.502562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.457 [2024-07-24 22:29:28.502583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.457 [2024-07-24 22:29:28.502590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.457 [2024-07-24 22:29:28.520407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.457 [2024-07-24 22:29:28.520427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.457 [2024-07-24 22:29:28.520435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.457 [2024-07-24 22:29:28.533775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.457 [2024-07-24 22:29:28.533794] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.457 [2024-07-24 22:29:28.533802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.457 [2024-07-24 22:29:28.552999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.457 [2024-07-24 22:29:28.553021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.457 [2024-07-24 22:29:28.553029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.457 [2024-07-24 22:29:28.569788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.457 [2024-07-24 22:29:28.569807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.457 [2024-07-24 22:29:28.569815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.593004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.593024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.593032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.612568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.612588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.612595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.629643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.629662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.629670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.645765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.645786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.645795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.660728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 
00:30:33.716 [2024-07-24 22:29:28.660748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.660755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.675457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.675477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.675485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.689613] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.689632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.689644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.704772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.704791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.704799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.724343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.724362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.724370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.741842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.741862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.741869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.761779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.761798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.761806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.775493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.775512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.775519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.795400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.795419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.795426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.812253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.812272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.812280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.716 [2024-07-24 22:29:28.834904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.716 [2024-07-24 22:29:28.834923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.716 [2024-07-24 22:29:28.834931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:28.855712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:28.855735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:28.855742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:28.872999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:28.873018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:28.873026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:28.896027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:28.896051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:28.896059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:28.916773] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:28.916793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:28.916800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:28.933605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:28.933624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:28.933632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:28.955841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:28.955863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:28.955870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:28.975000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:28.975020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:28.975027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:28.996643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:28.996662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:28.996669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:29.010984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:29.011003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:29.011011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:29.024120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:29.024139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:29.024147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:30:33.975 [2024-07-24 22:29:29.037402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:29.037421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:29.037428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:29.050636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:29.050656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:29.050663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:29.064006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:29.064026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:29.064034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:29.077188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.975 [2024-07-24 22:29:29.077207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.975 [2024-07-24 22:29:29.077214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:33.975 [2024-07-24 22:29:29.090346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.976 [2024-07-24 22:29:29.090366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.976 [2024-07-24 22:29:29.090373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:33.976 [2024-07-24 22:29:29.103351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xee2630) 00:30:33.976 [2024-07-24 22:29:29.103371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:33.976 [2024-07-24 22:29:29.103378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:34.234 00:30:34.234 Latency(us) 00:30:34.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.234 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:30:34.234 nvme0n1 : 2.00 2126.38 265.80 0.00 0.00 7521.47 6354.14 28607.89 00:30:34.234 =================================================================================================================== 00:30:34.234 Total : 2126.38 265.80 0.00 
0.00 7521.47 6354.14 28607.89
00:30:34.234 0
00:30:34.234 22:29:29 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:34.234 22:29:29 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:34.234 22:29:29 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:34.234 | .driver_specific
00:30:34.234 | .nvme_error
00:30:34.234 | .status_code
00:30:34.234 | .command_transient_transport_error'
00:30:34.234 22:29:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:34.234 22:29:29 -- host/digest.sh@71 -- # (( 137 > 0 ))
00:30:34.234 22:29:29 -- host/digest.sh@73 -- # killprocess 3732680
00:30:34.234 22:29:29 -- common/autotest_common.sh@926 -- # '[' -z 3732680 ']'
00:30:34.234 22:29:29 -- common/autotest_common.sh@930 -- # kill -0 3732680
00:30:34.234 22:29:29 -- common/autotest_common.sh@931 -- # uname
00:30:34.234 22:29:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:34.234 22:29:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3732680
00:30:34.234 22:29:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:30:34.234 22:29:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:30:34.234 22:29:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3732680'
00:30:34.234 killing process with pid 3732680
00:30:34.234 22:29:29 -- common/autotest_common.sh@945 -- # kill 3732680
00:30:34.234 Received shutdown signal, test time was about 2.000000 seconds
00:30:34.234
00:30:34.234 Latency(us)
00:30:34.234 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:34.234 ===================================================================================================================
00:30:34.234 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:34.234 22:29:29 -- common/autotest_common.sh@950 -- # wait 3732680
00:30:34.234 22:29:29 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:30:34.234 22:29:29 -- host/digest.sh@54 -- # local rw bs qd
00:30:34.234 22:29:29 -- host/digest.sh@56 -- # rw=randwrite
00:30:34.234 22:29:29 -- host/digest.sh@56 -- # bs=4096
00:30:34.234 22:29:29 -- host/digest.sh@56 -- # qd=128
00:30:34.234 22:29:29 -- host/digest.sh@58 -- # bperfpid=3733273
00:30:34.234 22:29:29 -- host/digest.sh@60 -- # waitforlisten 3733273 /var/tmp/bperf.sock
00:30:34.234 22:29:29 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:30:34.493 22:29:29 -- common/autotest_common.sh@819 -- # '[' -z 3733273 ']'
00:30:34.493 22:29:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:34.493 22:29:29 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:34.493 22:29:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:34.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:34.493 22:29:29 -- common/autotest_common.sh@828 -- # xtrace_disable
00:30:34.493 22:29:29 -- common/autotest_common.sh@10 -- # set +x
00:30:34.493 [2024-07-24 22:29:29.558766] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization...
00:30:34.493 [2024-07-24 22:29:29.558810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733273 ]
00:30:34.493 EAL: No free 2048 kB hugepages reported on node 1
00:30:34.493 [2024-07-24 22:29:29.611847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:34.493 [2024-07-24 22:29:29.651915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:30:35.320 22:29:30 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:35.320 22:29:30 -- common/autotest_common.sh@852 -- # return 0
00:30:35.320 22:29:30 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:35.320 22:29:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:35.579 22:29:30 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:35.579 22:29:30 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:35.579 22:29:30 -- common/autotest_common.sh@10 -- # set +x
00:30:35.579 22:29:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:35.579 22:29:30 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:35.579 22:29:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:35.837 nvme0n1
00:30:35.837 22:29:30 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:30:35.837 22:29:30 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:35.837 22:29:30 -- common/autotest_common.sh@10 -- # set +x
00:30:35.837 22:29:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:35.837 22:29:30 -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:35.837 22:29:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:36.096 Running I/O for 2 seconds...
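The xtrace above is host/digest.sh setting up the randwrite digest-error pass over bdevperf's RPC socket: enable per-status-code NVMe error counting, attach the TCP controller with data digest enabled (--ddgst), arm crc32c error injection in the accel layer, run perform_tests, and later read back the command_transient_transport_error counter that the earlier jq filter extracts. Below is a minimal Python sketch of that sequence, assuming the workspace paths and the /var/tmp/bperf.sock socket shown in the trace and a bdevperf instance already started with -z; it is an illustration of the flow, not the actual test script.

#!/usr/bin/env python3
# Sketch of the digest-error flow traced above. Illustrative only; the real
# logic lives in the host/digest.sh shell helpers shown in the xtrace.
# Assumes bdevperf was launched with -z and listens on /var/tmp/bperf.sock,
# and that the SPDK checkout sits at the workspace path from this log.
import json
import subprocess

SPDK = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"
RPC = [SPDK + "/scripts/rpc.py", "-s", "/var/tmp/bperf.sock"]
BPERF_PY = [SPDK + "/examples/bdev/bdevperf/bdevperf.py", "-s", "/var/tmp/bperf.sock"]

def rpc(*args):
    # Drive scripts/rpc.py against the bperf socket and return its stdout.
    return subprocess.run(RPC + list(args), check=True,
                          capture_output=True, text=True).stdout

# Track NVMe errors per status code and keep retrying failed I/O at the bdev layer.
rpc("bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1")

# Attach the TCP target with data digest enabled so data PDUs carry a CRC-32C.
rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp",
    "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
    "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")

# Arm crc32c corruption in the accel layer (same arguments as the trace),
# then kick off the queued bdevperf workload.
rpc("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "256")
subprocess.run(BPERF_PY + ["perform_tests"], check=True)

# Same counter the jq filter above pulls out of bdev_get_iostat.
stats = json.loads(rpc("bdev_get_iostat", "-b", "nvme0n1"))
errors = (stats["bdevs"][0]["driver_specific"]["nvme_error"]
          ["status_code"]["command_transient_transport_error"])
print("command_transient_transport_error =", errors)
assert errors > 0  # mirrors the (( errcount > 0 )) check in digest.sh

The corrupted digests surface at the initiator as the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions filling the log below, which is why the test expects that counter to be non-zero after the run.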
00:30:36.096 [2024-07-24 22:29:31.088106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.096 [2024-07-24 22:29:31.088835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.096 [2024-07-24 22:29:31.088863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.096 [2024-07-24 22:29:31.097488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.096 [2024-07-24 22:29:31.097752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.096 [2024-07-24 22:29:31.097774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.096 [2024-07-24 22:29:31.106844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.096 [2024-07-24 22:29:31.107106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.096 [2024-07-24 22:29:31.107125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.096 [2024-07-24 22:29:31.116226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.096 [2024-07-24 22:29:31.116485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.096 [2024-07-24 22:29:31.116504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.096 [2024-07-24 22:29:31.125558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.096 [2024-07-24 22:29:31.125819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.096 [2024-07-24 22:29:31.125837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.096 [2024-07-24 22:29:31.134912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.096 [2024-07-24 22:29:31.135176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.096 [2024-07-24 22:29:31.135194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.096 [2024-07-24 22:29:31.144197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.096 [2024-07-24 22:29:31.144456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.096 [2024-07-24 22:29:31.144481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:36.096 [2024-07-24 22:29:31.153467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.096 [2024-07-24 22:29:31.153727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.096 [2024-07-24 22:29:31.153746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.097 [2024-07-24 22:29:31.162786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.097 [2024-07-24 22:29:31.163060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.097 [2024-07-24 22:29:31.163079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.097 [2024-07-24 22:29:31.172127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.097 [2024-07-24 22:29:31.172386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.097 [2024-07-24 22:29:31.172404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.097 [2024-07-24 22:29:31.181410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.097 [2024-07-24 22:29:31.181829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.097 [2024-07-24 22:29:31.181847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.097 [2024-07-24 22:29:31.190727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.097 [2024-07-24 22:29:31.190976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.097 [2024-07-24 22:29:31.190994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.097 [2024-07-24 22:29:31.200017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.097 [2024-07-24 22:29:31.200305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.097 [2024-07-24 22:29:31.200323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.097 [2024-07-24 22:29:31.209344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.097 [2024-07-24 22:29:31.209597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.097 [2024-07-24 22:29:31.209615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.097 [2024-07-24 22:29:31.218618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.097 [2024-07-24 22:29:31.218872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.097 [2024-07-24 22:29:31.218890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.097 [2024-07-24 22:29:31.228055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.097 [2024-07-24 22:29:31.228323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.097 [2024-07-24 22:29:31.228345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.237657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.237942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.237960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.247001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.247269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.247288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.256255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.256514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.256531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.265557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.265817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.265835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.274770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.275026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.275048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.283991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.284260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.284278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.293190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.293448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.293466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.302393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.302652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.302669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.311637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.311892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.311910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.320851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.321113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.321131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.330129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.330392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.330410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.339391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.339651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.339669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.348730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.348989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.349007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.357955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.358213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.358231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.367229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.367487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.367505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.376442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.376698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.376716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.385647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.385901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.385919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.394916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.395171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.395190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.404188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.404453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.404472] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.413444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.413708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.413726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.422529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.422789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.422806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.431760] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.432021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.432039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.357 [2024-07-24 22:29:31.440997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.357 [2024-07-24 22:29:31.441264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.357 [2024-07-24 22:29:31.441282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.358 [2024-07-24 22:29:31.450195] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.358 [2024-07-24 22:29:31.450456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.358 [2024-07-24 22:29:31.450474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.358 [2024-07-24 22:29:31.459430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.358 [2024-07-24 22:29:31.459690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.358 [2024-07-24 22:29:31.459709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.358 [2024-07-24 22:29:31.468658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.358 [2024-07-24 22:29:31.468918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.358 [2024-07-24 22:29:31.468939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.358 [2024-07-24 22:29:31.477858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.358 [2024-07-24 22:29:31.478126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.358 [2024-07-24 22:29:31.478145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.358 [2024-07-24 22:29:31.487259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.358 [2024-07-24 22:29:31.487532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.358 [2024-07-24 22:29:31.487550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.617 [2024-07-24 22:29:31.497056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.617 [2024-07-24 22:29:31.497323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.617 [2024-07-24 22:29:31.497341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.617 [2024-07-24 22:29:31.506307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.617 [2024-07-24 22:29:31.506568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.617 [2024-07-24 22:29:31.506586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.617 [2024-07-24 22:29:31.515549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.617 [2024-07-24 22:29:31.515809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.617 [2024-07-24 22:29:31.515828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.617 [2024-07-24 22:29:31.524757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.617 [2024-07-24 22:29:31.525012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.617 [2024-07-24 22:29:31.525031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.617 [2024-07-24 22:29:31.534020] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.617 [2024-07-24 22:29:31.534281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.617 [2024-07-24 22:29:31.534300] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.617 [2024-07-24 22:29:31.543187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.617 [2024-07-24 22:29:31.543448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.617 [2024-07-24 22:29:31.543466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.617 [2024-07-24 22:29:31.552384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.617 [2024-07-24 22:29:31.552646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.617 [2024-07-24 22:29:31.552664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.617 [2024-07-24 22:29:31.561635] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.561893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.561911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.570847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.571108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.571127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.580070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.580329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.580347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.589296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.589553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.589571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.598600] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.598859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 
22:29:31.598877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.607947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.608207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.608225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.617172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.617432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.617450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.626358] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.626619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.626638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.635484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.635746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.635764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.644689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.644951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.644969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.653884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.654145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.654163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.663104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.663368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:36.618 [2024-07-24 22:29:31.663386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.672316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.672579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.672596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.681533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.681799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.681817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.690706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.690982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.691001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.699914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.700182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.700200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.709192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.709455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.709476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.718409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.718668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.718687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.727605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.727862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21637 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.727880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.736851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.737115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.737133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.618 [2024-07-24 22:29:31.746186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.618 [2024-07-24 22:29:31.746455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.618 [2024-07-24 22:29:31.746474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.755853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.756148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.756167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.765127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.765393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.765411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.774398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.774663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.774681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.783618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.783880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.783898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.792842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.793111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6834 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.793129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.802068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.802332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.802350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.811333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.811595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.811613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.820500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.820758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.820777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.829734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.829988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.830006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.838984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.839266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.839284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.848255] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.848515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.848532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.857627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.857888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 
lba:7250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.857905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.866863] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.867125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.867143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.876114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.876368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.876386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.885366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.885625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.885643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.879 [2024-07-24 22:29:31.894594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.879 [2024-07-24 22:29:31.894853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.879 [2024-07-24 22:29:31.894871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.880 [2024-07-24 22:29:31.903818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.880 [2024-07-24 22:29:31.904080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.880 [2024-07-24 22:29:31.904098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.880 [2024-07-24 22:29:31.913048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.880 [2024-07-24 22:29:31.913309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.880 [2024-07-24 22:29:31.913327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.880 [2024-07-24 22:29:31.922241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.880 [2024-07-24 22:29:31.922502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:97 nsid:1 lba:8494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.880 [2024-07-24 22:29:31.922520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.880 [2024-07-24 22:29:31.931381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.880 [2024-07-24 22:29:31.931652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.880 [2024-07-24 22:29:31.931670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.880 [2024-07-24 22:29:31.940626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.880 [2024-07-24 22:29:31.940882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.880 [2024-07-24 22:29:31.940900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.880 [2024-07-24 22:29:31.949838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.880 [2024-07-24 22:29:31.950100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.880 [2024-07-24 22:29:31.950121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.880 [2024-07-24 22:29:31.959097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.880 [2024-07-24 22:29:31.959353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.880 [2024-07-24 22:29:31.959371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.880 [2024-07-24 22:29:31.968316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.880 [2024-07-24 22:29:31.968575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.880 [2024-07-24 22:29:31.968593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.880 [2024-07-24 22:29:31.977526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.880 [2024-07-24 22:29:31.977788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.880 [2024-07-24 22:29:31.977805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.880 [2024-07-24 22:29:31.986755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.880 [2024-07-24 22:29:31.987018] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.880 [2024-07-24 22:29:31.987037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.880 [2024-07-24 22:29:31.995956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.880 [2024-07-24 22:29:31.996220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.880 [2024-07-24 22:29:31.996240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:36.880 [2024-07-24 22:29:32.005278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:36.880 [2024-07-24 22:29:32.005538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:36.880 [2024-07-24 22:29:32.005556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.014891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.015159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.015177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.024241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.024503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.024521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.033515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.033780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.033798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.042766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.043029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.043052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.051993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.052253] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.052271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.061247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.061502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.061519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.070468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.070723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.070741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.079709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.079970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.079988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.088868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.089131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.089148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.098141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.098412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.098429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.107633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.107915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.107933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.116900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.117167] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.117185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.126103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.126368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.126385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.135356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.135622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.135640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.144490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.144750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.144768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.153715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.153970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.153988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.162969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.140 [2024-07-24 22:29:32.163236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.140 [2024-07-24 22:29:32.163255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.140 [2024-07-24 22:29:32.172189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.141 [2024-07-24 22:29:32.172450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.141 [2024-07-24 22:29:32.172468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.141 [2024-07-24 22:29:32.181440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.141 
[2024-07-24 22:29:32.181702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.141 [2024-07-24 22:29:32.181720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.141 [2024-07-24 22:29:32.190651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.141 [2024-07-24 22:29:32.190916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.141 [2024-07-24 22:29:32.190934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.141 [2024-07-24 22:29:32.199891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.141 [2024-07-24 22:29:32.200152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.141 [2024-07-24 22:29:32.200170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.141 [2024-07-24 22:29:32.209171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.141 [2024-07-24 22:29:32.209433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.141 [2024-07-24 22:29:32.209451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.141 [2024-07-24 22:29:32.218384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.141 [2024-07-24 22:29:32.218645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.141 [2024-07-24 22:29:32.218663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.141 [2024-07-24 22:29:32.227624] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.141 [2024-07-24 22:29:32.227879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.141 [2024-07-24 22:29:32.227897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.141 [2024-07-24 22:29:32.237006] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.141 [2024-07-24 22:29:32.237271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.141 [2024-07-24 22:29:32.237289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.141 [2024-07-24 22:29:32.246390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 
00:30:37.141 [2024-07-24 22:29:32.246668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.141 [2024-07-24 22:29:32.246687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.141 [2024-07-24 22:29:32.255724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.141 [2024-07-24 22:29:32.255990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.141 [2024-07-24 22:29:32.256008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.141 [2024-07-24 22:29:32.264972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.141 [2024-07-24 22:29:32.265238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.141 [2024-07-24 22:29:32.265256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.274609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.274870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.401 [2024-07-24 22:29:32.274892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.284051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.284315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.401 [2024-07-24 22:29:32.284333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.293243] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.293504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.401 [2024-07-24 22:29:32.293522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.302486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.302745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.401 [2024-07-24 22:29:32.302763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.311735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with 
pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.311995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.401 [2024-07-24 22:29:32.312013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.320984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.321250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.401 [2024-07-24 22:29:32.321268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.330217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.330489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.401 [2024-07-24 22:29:32.330507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.339467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.339749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.401 [2024-07-24 22:29:32.339768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.348732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.349082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.401 [2024-07-24 22:29:32.349100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.358148] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.358408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.401 [2024-07-24 22:29:32.358425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.367350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.367609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.401 [2024-07-24 22:29:32.367627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.376634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.376894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:9800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.401 [2024-07-24 22:29:32.376912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.385897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.386155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.401 [2024-07-24 22:29:32.386173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.401 [2024-07-24 22:29:32.395134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.401 [2024-07-24 22:29:32.395391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.395408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.404450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.404710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.404727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.413676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.413938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.413956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.423069] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.423334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.423353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.432320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.432575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.432594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.441541] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.441794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.441811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.450780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.451040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.451064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.460035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.460303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.460320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.469282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.469543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.469561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.478470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.478730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.478748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.487707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.487975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.487992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.497101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.497362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.497381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.506374] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.506636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.506654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.515587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.515851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.515871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.402 [2024-07-24 22:29:32.524831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.402 [2024-07-24 22:29:32.525101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.402 [2024-07-24 22:29:32.525119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 22:29:32.534418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.534690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.534709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 22:29:32.543871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.544136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.544153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 22:29:32.553137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.553397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.553415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 22:29:32.562406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.562665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.562684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 
22:29:32.571647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.571896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.571914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 22:29:32.580986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.581245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.581262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 22:29:32.590261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.590517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.590535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 22:29:32.599486] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.599762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.599780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 22:29:32.608908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.609561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.609579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 22:29:32.618322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.618572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.618590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 22:29:32.627616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.627864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.627882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:37.662 [2024-07-24 22:29:32.636926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.637314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.637333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 22:29:32.646192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.646437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.646456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 22:29:32.655502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.655751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.662 [2024-07-24 22:29:32.655769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.662 [2024-07-24 22:29:32.664738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.662 [2024-07-24 22:29:32.664987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.665005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-07-24 22:29:32.674030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.663 [2024-07-24 22:29:32.674362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.674379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-07-24 22:29:32.683358] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.663 [2024-07-24 22:29:32.683604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.683622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-07-24 22:29:32.692602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.663 [2024-07-24 22:29:32.692854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.692872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:37.663 [2024-07-24 22:29:32.701927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.663 [2024-07-24 22:29:32.702189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.702207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-07-24 22:29:32.711232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.663 [2024-07-24 22:29:32.711478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.711496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-07-24 22:29:32.720499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.663 [2024-07-24 22:29:32.720748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.720766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-07-24 22:29:32.729785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.663 [2024-07-24 22:29:32.730331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.730349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-07-24 22:29:32.738984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.663 [2024-07-24 22:29:32.739239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:10565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.739256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-07-24 22:29:32.748296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.663 [2024-07-24 22:29:32.748540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.748558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-07-24 22:29:32.757601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.663 [2024-07-24 22:29:32.757848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.757871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-07-24 22:29:32.766728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fef90 00:30:37.663 [2024-07-24 22:29:32.768746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.768764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.663 [2024-07-24 22:29:32.780265] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190feb58 00:30:37.663 [2024-07-24 22:29:32.781173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.781191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:37.663 [2024-07-24 22:29:32.790453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fd640 00:30:37.663 [2024-07-24 22:29:32.790724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.663 [2024-07-24 22:29:32.790742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.800084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fd640 00:30:37.923 [2024-07-24 22:29:32.800458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.800476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.811204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fd640 00:30:37.923 [2024-07-24 22:29:32.812943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.812961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.823328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fdeb0 00:30:37.923 [2024-07-24 22:29:32.824475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.824493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.832539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fdeb0 00:30:37.923 [2024-07-24 22:29:32.832783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.832801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.841843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fdeb0 00:30:37.923 [2024-07-24 22:29:32.842081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.842099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.851078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fdeb0 00:30:37.923 [2024-07-24 22:29:32.851320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.851337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.860467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fdeb0 00:30:37.923 [2024-07-24 22:29:32.860708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.860726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.869757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fdeb0 00:30:37.923 [2024-07-24 22:29:32.869999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.870017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.878993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fdeb0 00:30:37.923 [2024-07-24 22:29:32.879242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.879261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.888262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fdeb0 00:30:37.923 [2024-07-24 22:29:32.888706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.888723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.897547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fdeb0 00:30:37.923 [2024-07-24 22:29:32.897787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.897806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.906808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fdeb0 00:30:37.923 [2024-07-24 22:29:32.907051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.907070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.916060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fdeb0 00:30:37.923 [2024-07-24 22:29:32.917384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.917401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.931665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fda78 00:30:37.923 [2024-07-24 22:29:32.932581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.932599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.940955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190f96f8 00:30:37.923 [2024-07-24 22:29:32.941390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.941408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.950322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190f96f8 00:30:37.923 [2024-07-24 22:29:32.950541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.950559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.959544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190f96f8 00:30:37.923 [2024-07-24 22:29:32.959759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.923 [2024-07-24 22:29:32.959777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:37.923 [2024-07-24 22:29:32.968845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190f96f8 00:30:37.924 [2024-07-24 22:29:32.969066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.924 [2024-07-24 22:29:32.969084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:37.924 [2024-07-24 22:29:32.978085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190f96f8 00:30:37.924 [2024-07-24 22:29:32.979192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.924 [2024-07-24 22:29:32.979210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:37.924 [2024-07-24 22:29:32.990202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190fc128 00:30:37.924 [2024-07-24 22:29:32.991160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.924 [2024-07-24 22:29:32.991178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:37.924 [2024-07-24 22:29:32.999305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190f3a28 00:30:37.924 [2024-07-24 22:29:33.000384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.924 [2024-07-24 22:29:33.000403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:37.924 [2024-07-24 22:29:33.008193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190ebfd0 00:30:37.924 [2024-07-24 22:29:33.009459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.924 [2024-07-24 22:29:33.009477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:37.924 [2024-07-24 22:29:33.017129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190f4298 00:30:37.924 [2024-07-24 22:29:33.018130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.924 [2024-07-24 22:29:33.018154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:37.924 [2024-07-24 22:29:33.026037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190ecc78 00:30:37.924 [2024-07-24 22:29:33.027068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.924 [2024-07-24 22:29:33.027088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:37.924 [2024-07-24 22:29:33.034308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190ea680 00:30:37.924 [2024-07-24 22:29:33.035088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.924 [2024-07-24 
22:29:33.035106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:37.924 [2024-07-24 22:29:33.044191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190eb328 00:30:37.924 [2024-07-24 22:29:33.045589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.924 [2024-07-24 22:29:33.045607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:37.924 [2024-07-24 22:29:33.053109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3910) with pdu=0x2000190f4298 00:30:37.924 [2024-07-24 22:29:33.054251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.924 [2024-07-24 22:29:33.054269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:38.183 00:30:38.183 Latency(us) 00:30:38.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.183 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:38.183 nvme0n1 : 2.00 26890.69 105.04 0.00 0.00 4755.19 3362.28 32824.99 00:30:38.183 =================================================================================================================== 00:30:38.183 Total : 26890.69 105.04 0.00 0.00 4755.19 3362.28 32824.99 00:30:38.183 0 00:30:38.183 22:29:33 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:38.183 22:29:33 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:38.183 22:29:33 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:38.183 | .driver_specific 00:30:38.183 | .nvme_error 00:30:38.183 | .status_code 00:30:38.183 | .command_transient_transport_error' 00:30:38.183 22:29:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:38.183 22:29:33 -- host/digest.sh@71 -- # (( 211 > 0 )) 00:30:38.183 22:29:33 -- host/digest.sh@73 -- # killprocess 3733273 00:30:38.183 22:29:33 -- common/autotest_common.sh@926 -- # '[' -z 3733273 ']' 00:30:38.183 22:29:33 -- common/autotest_common.sh@930 -- # kill -0 3733273 00:30:38.183 22:29:33 -- common/autotest_common.sh@931 -- # uname 00:30:38.183 22:29:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:38.184 22:29:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3733273 00:30:38.184 22:29:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:38.184 22:29:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:38.184 22:29:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3733273' 00:30:38.184 killing process with pid 3733273 00:30:38.184 22:29:33 -- common/autotest_common.sh@945 -- # kill 3733273 00:30:38.184 Received shutdown signal, test time was about 2.000000 seconds 00:30:38.184 00:30:38.184 Latency(us) 00:30:38.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.184 =================================================================================================================== 00:30:38.184 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:38.184 22:29:33 -- 
common/autotest_common.sh@950 -- # wait 3733273 00:30:38.444 22:29:33 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:30:38.444 22:29:33 -- host/digest.sh@54 -- # local rw bs qd 00:30:38.444 22:29:33 -- host/digest.sh@56 -- # rw=randwrite 00:30:38.444 22:29:33 -- host/digest.sh@56 -- # bs=131072 00:30:38.444 22:29:33 -- host/digest.sh@56 -- # qd=16 00:30:38.444 22:29:33 -- host/digest.sh@58 -- # bperfpid=3733876 00:30:38.444 22:29:33 -- host/digest.sh@60 -- # waitforlisten 3733876 /var/tmp/bperf.sock 00:30:38.444 22:29:33 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:30:38.444 22:29:33 -- common/autotest_common.sh@819 -- # '[' -z 3733876 ']' 00:30:38.444 22:29:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:38.444 22:29:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:38.445 22:29:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:38.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:38.445 22:29:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:38.445 22:29:33 -- common/autotest_common.sh@10 -- # set +x 00:30:38.445 [2024-07-24 22:29:33.507488] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:38.445 [2024-07-24 22:29:33.507534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3733876 ] 00:30:38.445 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:38.445 Zero copy mechanism will not be used. 
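The bdevperf process above is started with -z, so it waits on its private RPC socket (/var/tmp/bperf.sock) and only runs the workload once the later perform_tests RPC arrives; the waitforlisten step returns as soon as that socket accepts connections. A minimal stand-alone sketch of the same launch-and-wait pattern, with paths relative to the spdk checkout and a plain rpc_get_methods poll standing in for the harness's waitforlisten helper:

  # Start bdevperf paused (-z): 128 KiB random writes, queue depth 16, 2 s runtime, core mask 0x2
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperf_pid=$!

  # Block until the UNIX-domain RPC socket is up before sending any configuration RPCs
  until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done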
00:30:38.445 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.445 [2024-07-24 22:29:33.561985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.705 [2024-07-24 22:29:33.597466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:39.272 22:29:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:39.272 22:29:34 -- common/autotest_common.sh@852 -- # return 0 00:30:39.272 22:29:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:39.272 22:29:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:30:39.531 22:29:34 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:30:39.531 22:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:39.531 22:29:34 -- common/autotest_common.sh@10 -- # set +x 00:30:39.531 22:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:39.531 22:29:34 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:39.531 22:29:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:30:39.790 nvme0n1 00:30:39.790 22:29:34 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:30:39.790 22:29:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:39.790 22:29:34 -- common/autotest_common.sh@10 -- # set +x 00:30:39.790 22:29:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:39.790 22:29:34 -- host/digest.sh@69 -- # bperf_py perform_tests 00:30:39.790 22:29:34 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:39.790 I/O size of 131072 is greater than zero copy threshold (65536). 00:30:39.790 Zero copy mechanism will not be used. 00:30:39.790 Running I/O for 2 seconds... 
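Between the controller setup and the two-second run, the trace above captures the whole error-injection sequence: NVMe error statistics are enabled on the bdevperf side, any stale CRC-32C injection is cleared, the controller is attached with data digests enabled (--ddgst), the accel layer's CRC-32C calculation is then corrupted, and the queued workload is finally started over RPC. A condensed sketch of the same sequence as stand-alone rpc.py calls, assuming the harness's rpc_cmd wrapper targets SPDK's default /var/tmp/spdk.sock and reusing the socket path, target address, port and NQN shown in the log:

  # bdevperf side: keep per-error-code NVMe statistics and retry failed commands at the bdev layer
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # clear any CRC-32C error injection left over from a previous run (rpc_cmd / default socket)
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # attach the remote namespace with data digest enabled so every TCP data PDU carries a CRC-32C
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # corrupt the CRC-32C calculation (the -i 32 argument is taken verbatim from the trace),
  # start the queued workload, then read back how many commands completed with
  # TRANSIENT TRANSPORT ERROR (00/22) -- the count the test later asserts to be greater than zero
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'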
00:30:39.791 [2024-07-24 22:29:34.860419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:39.791 [2024-07-24 22:29:34.860756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.791 [2024-07-24 22:29:34.860783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:39.791 [2024-07-24 22:29:34.880064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:39.791 [2024-07-24 22:29:34.880601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.791 [2024-07-24 22:29:34.880624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:39.791 [2024-07-24 22:29:34.899343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:39.791 [2024-07-24 22:29:34.899907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.791 [2024-07-24 22:29:34.899928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:39.791 [2024-07-24 22:29:34.921011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:39.791 [2024-07-24 22:29:34.921529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.791 [2024-07-24 22:29:34.921549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:34.941207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:34.941719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:34.941738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:34.962152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:34.962698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:34.962716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:34.981558] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:34.982025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:34.982047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:35.002781] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:35.003556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:35.003575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:35.024097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:35.024731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:35.024751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:35.045579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:35.046087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:35.046106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:35.065970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:35.066423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:35.066442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:35.087288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:35.087746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:35.087765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:35.107262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:35.107799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:35.107818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:35.127982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:35.128522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:35.128541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:35.149428] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:35.150295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:35.150313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:35.170347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:35.171144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:35.171164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:35.189962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:35.190634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:35.190654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:40.081 [2024-07-24 22:29:35.210145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.081 [2024-07-24 22:29:35.210785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.081 [2024-07-24 22:29:35.210808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:40.340 [2024-07-24 22:29:35.231631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.340 [2024-07-24 22:29:35.232384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.340 [2024-07-24 22:29:35.232403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:40.340 [2024-07-24 22:29:35.252751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.340 [2024-07-24 22:29:35.253489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.340 [2024-07-24 22:29:35.253508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.340 [2024-07-24 22:29:35.274168] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.340 [2024-07-24 22:29:35.274923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.340 [2024-07-24 22:29:35.274942] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:40.340 [2024-07-24 22:29:35.296821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.340 [2024-07-24 22:29:35.297376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.340 [2024-07-24 22:29:35.297395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:40.340 [2024-07-24 22:29:35.319112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.340 [2024-07-24 22:29:35.319759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.340 [2024-07-24 22:29:35.319777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:40.340 [2024-07-24 22:29:35.342167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.340 [2024-07-24 22:29:35.342835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.341 [2024-07-24 22:29:35.342854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.341 [2024-07-24 22:29:35.363271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.341 [2024-07-24 22:29:35.363824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.341 [2024-07-24 22:29:35.363842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:40.341 [2024-07-24 22:29:35.384272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.341 [2024-07-24 22:29:35.384673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.341 [2024-07-24 22:29:35.384691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:40.341 [2024-07-24 22:29:35.407411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.341 [2024-07-24 22:29:35.407886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.341 [2024-07-24 22:29:35.407904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:40.341 [2024-07-24 22:29:35.429127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.341 [2024-07-24 22:29:35.429663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.341 
[2024-07-24 22:29:35.429682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.341 [2024-07-24 22:29:35.448970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.341 [2024-07-24 22:29:35.449516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.341 [2024-07-24 22:29:35.449535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:40.341 [2024-07-24 22:29:35.469535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.341 [2024-07-24 22:29:35.469988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.341 [2024-07-24 22:29:35.470007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:40.600 [2024-07-24 22:29:35.488798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.600 [2024-07-24 22:29:35.489371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-07-24 22:29:35.489390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:40.600 [2024-07-24 22:29:35.508126] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.600 [2024-07-24 22:29:35.508796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-07-24 22:29:35.508816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.600 [2024-07-24 22:29:35.528084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.600 [2024-07-24 22:29:35.528613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-07-24 22:29:35.528632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:40.600 [2024-07-24 22:29:35.549213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.600 [2024-07-24 22:29:35.549852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-07-24 22:29:35.549871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:40.600 [2024-07-24 22:29:35.572098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.600 [2024-07-24 22:29:35.572831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-07-24 22:29:35.572849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:40.600 [2024-07-24 22:29:35.594702] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.600 [2024-07-24 22:29:35.595421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-07-24 22:29:35.595439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.600 [2024-07-24 22:29:35.616903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.600 [2024-07-24 22:29:35.617342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-07-24 22:29:35.617360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:40.600 [2024-07-24 22:29:35.638478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.600 [2024-07-24 22:29:35.639130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-07-24 22:29:35.639149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:40.600 [2024-07-24 22:29:35.659547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.600 [2024-07-24 22:29:35.660271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-07-24 22:29:35.660290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:40.600 [2024-07-24 22:29:35.682397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.600 [2024-07-24 22:29:35.683264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-07-24 22:29:35.683282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.600 [2024-07-24 22:29:35.705173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.600 [2024-07-24 22:29:35.705892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.600 [2024-07-24 22:29:35.705911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:40.601 [2024-07-24 22:29:35.726907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.601 [2024-07-24 22:29:35.727450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.601 [2024-07-24 22:29:35.727468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:40.860 [2024-07-24 22:29:35.749908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.860 [2024-07-24 22:29:35.750345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.860 [2024-07-24 22:29:35.750364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:40.860 [2024-07-24 22:29:35.772089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.860 [2024-07-24 22:29:35.772800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.860 [2024-07-24 22:29:35.772822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.860 [2024-07-24 22:29:35.795261] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.860 [2024-07-24 22:29:35.795811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.860 [2024-07-24 22:29:35.795830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:40.860 [2024-07-24 22:29:35.816083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.860 [2024-07-24 22:29:35.816789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.860 [2024-07-24 22:29:35.816808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:40.860 [2024-07-24 22:29:35.839185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.860 [2024-07-24 22:29:35.839836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.860 [2024-07-24 22:29:35.839855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:40.860 [2024-07-24 22:29:35.859247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.860 [2024-07-24 22:29:35.859842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.860 [2024-07-24 22:29:35.859860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.860 [2024-07-24 22:29:35.880780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.860 [2024-07-24 22:29:35.881456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.860 [2024-07-24 22:29:35.881475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:40.860 [2024-07-24 22:29:35.902380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.860 [2024-07-24 22:29:35.903008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.860 [2024-07-24 22:29:35.903027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:40.860 [2024-07-24 22:29:35.923883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.860 [2024-07-24 22:29:35.924415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.860 [2024-07-24 22:29:35.924433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:40.860 [2024-07-24 22:29:35.945298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.860 [2024-07-24 22:29:35.945878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.860 [2024-07-24 22:29:35.945896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:40.860 [2024-07-24 22:29:35.968906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.860 [2024-07-24 22:29:35.969477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.860 [2024-07-24 22:29:35.969497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:40.860 [2024-07-24 22:29:35.990426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:40.860 [2024-07-24 22:29:35.991059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:40.860 [2024-07-24 22:29:35.991077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.119 [2024-07-24 22:29:36.012521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.119 [2024-07-24 22:29:36.012955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.119 [2024-07-24 22:29:36.012973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.119 [2024-07-24 22:29:36.032596] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.119 
[2024-07-24 22:29:36.033390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.119 [2024-07-24 22:29:36.033409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.119 [2024-07-24 22:29:36.051396] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.119 [2024-07-24 22:29:36.052117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.119 [2024-07-24 22:29:36.052135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.119 [2024-07-24 22:29:36.071730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.119 [2024-07-24 22:29:36.072364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.119 [2024-07-24 22:29:36.072384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.120 [2024-07-24 22:29:36.094427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.120 [2024-07-24 22:29:36.095073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.120 [2024-07-24 22:29:36.095091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.120 [2024-07-24 22:29:36.116631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.120 [2024-07-24 22:29:36.116965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.120 [2024-07-24 22:29:36.116984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.120 [2024-07-24 22:29:36.137030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.120 [2024-07-24 22:29:36.137736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.120 [2024-07-24 22:29:36.137754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.120 [2024-07-24 22:29:36.158185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.120 [2024-07-24 22:29:36.158906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.120 [2024-07-24 22:29:36.158924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.120 [2024-07-24 22:29:36.180177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.120 [2024-07-24 22:29:36.180737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.120 [2024-07-24 22:29:36.180755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.120 [2024-07-24 22:29:36.202425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.120 [2024-07-24 22:29:36.203046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.120 [2024-07-24 22:29:36.203065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.120 [2024-07-24 22:29:36.224695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.120 [2024-07-24 22:29:36.225258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.120 [2024-07-24 22:29:36.225277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.120 [2024-07-24 22:29:36.248430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.120 [2024-07-24 22:29:36.248892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.120 [2024-07-24 22:29:36.248911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.379 [2024-07-24 22:29:36.270851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.379 [2024-07-24 22:29:36.271412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.379 [2024-07-24 22:29:36.271430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.379 [2024-07-24 22:29:36.293376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.379 [2024-07-24 22:29:36.293999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.379 [2024-07-24 22:29:36.294017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.379 [2024-07-24 22:29:36.316161] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.379 [2024-07-24 22:29:36.316726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.379 [2024-07-24 22:29:36.316744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.379 [2024-07-24 22:29:36.338127] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.379 [2024-07-24 22:29:36.338639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.379 [2024-07-24 22:29:36.338658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.379 [2024-07-24 22:29:36.359803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.379 [2024-07-24 22:29:36.360253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.379 [2024-07-24 22:29:36.360271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.379 [2024-07-24 22:29:36.379271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.379 [2024-07-24 22:29:36.380112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.379 [2024-07-24 22:29:36.380130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.379 [2024-07-24 22:29:36.400284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.379 [2024-07-24 22:29:36.400919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.379 [2024-07-24 22:29:36.400938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.379 [2024-07-24 22:29:36.423031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.379 [2024-07-24 22:29:36.423469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.379 [2024-07-24 22:29:36.423488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.379 [2024-07-24 22:29:36.443083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.379 [2024-07-24 22:29:36.443639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.379 [2024-07-24 22:29:36.443658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.379 [2024-07-24 22:29:36.464928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.379 [2024-07-24 22:29:36.465580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.379 [2024-07-24 22:29:36.465599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:30:41.379 [2024-07-24 22:29:36.487003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.379 [2024-07-24 22:29:36.487725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.380 [2024-07-24 22:29:36.487745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.380 [2024-07-24 22:29:36.509139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.380 [2024-07-24 22:29:36.509767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.380 [2024-07-24 22:29:36.509786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.639 [2024-07-24 22:29:36.532165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.639 [2024-07-24 22:29:36.532545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.639 [2024-07-24 22:29:36.532564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.639 [2024-07-24 22:29:36.555296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.639 [2024-07-24 22:29:36.556162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.639 [2024-07-24 22:29:36.556181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.639 [2024-07-24 22:29:36.577815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.639 [2024-07-24 22:29:36.578230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.639 [2024-07-24 22:29:36.578249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.639 [2024-07-24 22:29:36.599950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.639 [2024-07-24 22:29:36.600513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.639 [2024-07-24 22:29:36.600532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.639 [2024-07-24 22:29:36.623549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.639 [2024-07-24 22:29:36.624204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.639 [2024-07-24 22:29:36.624223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.639 [2024-07-24 22:29:36.647311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.639 [2024-07-24 22:29:36.648051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.639 [2024-07-24 22:29:36.648070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.639 [2024-07-24 22:29:36.670326] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.639 [2024-07-24 22:29:36.670890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.639 [2024-07-24 22:29:36.670908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.639 [2024-07-24 22:29:36.692641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.639 [2024-07-24 22:29:36.693256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.639 [2024-07-24 22:29:36.693275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.639 [2024-07-24 22:29:36.714763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.639 [2024-07-24 22:29:36.715151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.639 [2024-07-24 22:29:36.715174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.639 [2024-07-24 22:29:36.733062] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.639 [2024-07-24 22:29:36.733764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.639 [2024-07-24 22:29:36.733783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.639 [2024-07-24 22:29:36.752912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.639 [2024-07-24 22:29:36.753648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.639 [2024-07-24 22:29:36.753667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:41.898 [2024-07-24 22:29:36.775840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.898 [2024-07-24 22:29:36.776659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.898 [2024-07-24 22:29:36.776678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:41.898 [2024-07-24 22:29:36.799666] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.899 [2024-07-24 22:29:36.800360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.899 [2024-07-24 22:29:36.800378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:41.899 [2024-07-24 22:29:36.822971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15d3c50) with pdu=0x2000190fef90 00:30:41.899 [2024-07-24 22:29:36.823654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:41.899 [2024-07-24 22:29:36.823673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:41.899 00:30:41.899 Latency(us) 00:30:41.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:41.899 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:30:41.899 nvme0n1 : 2.01 1423.00 177.87 0.00 0.00 11207.71 7636.37 33736.79 00:30:41.899 =================================================================================================================== 00:30:41.899 Total : 1423.00 177.87 0.00 0.00 11207.71 7636.37 33736.79 00:30:41.899 0 00:30:41.899 22:29:36 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:41.899 22:29:36 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:41.899 22:29:36 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:41.899 | .driver_specific 00:30:41.899 | .nvme_error 00:30:41.899 | .status_code 00:30:41.899 | .command_transient_transport_error' 00:30:41.899 22:29:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:41.899 22:29:37 -- host/digest.sh@71 -- # (( 92 > 0 )) 00:30:41.899 22:29:37 -- host/digest.sh@73 -- # killprocess 3733876 00:30:41.899 22:29:37 -- common/autotest_common.sh@926 -- # '[' -z 3733876 ']' 00:30:41.899 22:29:37 -- common/autotest_common.sh@930 -- # kill -0 3733876 00:30:41.899 22:29:37 -- common/autotest_common.sh@931 -- # uname 00:30:42.158 22:29:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:42.158 22:29:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3733876 00:30:42.158 22:29:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:42.158 22:29:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:42.158 22:29:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3733876' 00:30:42.158 killing process with pid 3733876 00:30:42.158 22:29:37 -- common/autotest_common.sh@945 -- # kill 3733876 00:30:42.158 Received shutdown signal, test time was about 2.000000 seconds 00:30:42.158 00:30:42.158 Latency(us) 00:30:42.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.158 =================================================================================================================== 00:30:42.158 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:42.158 22:29:37 -- common/autotest_common.sh@950 -- # wait 3733876 00:30:42.158 22:29:37 -- host/digest.sh@115 
-- # killprocess 3731948 00:30:42.158 22:29:37 -- common/autotest_common.sh@926 -- # '[' -z 3731948 ']' 00:30:42.158 22:29:37 -- common/autotest_common.sh@930 -- # kill -0 3731948 00:30:42.158 22:29:37 -- common/autotest_common.sh@931 -- # uname 00:30:42.158 22:29:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:42.158 22:29:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3731948 00:30:42.158 22:29:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:42.158 22:29:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:42.158 22:29:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3731948' 00:30:42.158 killing process with pid 3731948 00:30:42.158 22:29:37 -- common/autotest_common.sh@945 -- # kill 3731948 00:30:42.158 22:29:37 -- common/autotest_common.sh@950 -- # wait 3731948 00:30:42.417 00:30:42.417 real 0m15.928s 00:30:42.417 user 0m31.958s 00:30:42.417 sys 0m3.393s 00:30:42.417 22:29:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:42.417 22:29:37 -- common/autotest_common.sh@10 -- # set +x 00:30:42.417 ************************************ 00:30:42.417 END TEST nvmf_digest_error 00:30:42.417 ************************************ 00:30:42.417 22:29:37 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:30:42.417 22:29:37 -- host/digest.sh@139 -- # nvmftestfini 00:30:42.417 22:29:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:42.417 22:29:37 -- nvmf/common.sh@116 -- # sync 00:30:42.417 22:29:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:42.417 22:29:37 -- nvmf/common.sh@119 -- # set +e 00:30:42.417 22:29:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:42.417 22:29:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:42.417 rmmod nvme_tcp 00:30:42.417 rmmod nvme_fabrics 00:30:42.417 rmmod nvme_keyring 00:30:42.676 22:29:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:42.676 22:29:37 -- nvmf/common.sh@123 -- # set -e 00:30:42.676 22:29:37 -- nvmf/common.sh@124 -- # return 0 00:30:42.676 22:29:37 -- nvmf/common.sh@477 -- # '[' -n 3731948 ']' 00:30:42.676 22:29:37 -- nvmf/common.sh@478 -- # killprocess 3731948 00:30:42.676 22:29:37 -- common/autotest_common.sh@926 -- # '[' -z 3731948 ']' 00:30:42.676 22:29:37 -- common/autotest_common.sh@930 -- # kill -0 3731948 00:30:42.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3731948) - No such process 00:30:42.676 22:29:37 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3731948 is not found' 00:30:42.676 Process with pid 3731948 is not found 00:30:42.676 22:29:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:42.676 22:29:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:42.676 22:29:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:42.676 22:29:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:42.676 22:29:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:42.676 22:29:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.676 22:29:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:42.676 22:29:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.582 22:29:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:44.582 00:30:44.582 real 0m37.037s 00:30:44.582 user 1m0.177s 00:30:44.582 sys 0m10.674s 00:30:44.582 22:29:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:44.582 22:29:39 -- 
common/autotest_common.sh@10 -- # set +x 00:30:44.582 ************************************ 00:30:44.582 END TEST nvmf_digest 00:30:44.582 ************************************ 00:30:44.582 22:29:39 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:30:44.582 22:29:39 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:30:44.582 22:29:39 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:30:44.582 22:29:39 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:44.582 22:29:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:44.582 22:29:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:44.582 22:29:39 -- common/autotest_common.sh@10 -- # set +x 00:30:44.582 ************************************ 00:30:44.582 START TEST nvmf_bdevperf 00:30:44.582 ************************************ 00:30:44.582 22:29:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:30:44.841 * Looking for test storage... 00:30:44.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:44.841 22:29:39 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.841 22:29:39 -- nvmf/common.sh@7 -- # uname -s 00:30:44.841 22:29:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.841 22:29:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.841 22:29:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.841 22:29:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.841 22:29:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:44.841 22:29:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:44.841 22:29:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.841 22:29:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:44.841 22:29:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.841 22:29:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:44.841 22:29:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:44.841 22:29:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:44.841 22:29:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.841 22:29:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:44.841 22:29:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:44.841 22:29:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.841 22:29:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.841 22:29:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.841 22:29:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.841 22:29:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.841 22:29:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.841 22:29:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.841 22:29:39 -- paths/export.sh@5 -- # export PATH 00:30:44.841 22:29:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.841 22:29:39 -- nvmf/common.sh@46 -- # : 0 00:30:44.841 22:29:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:44.841 22:29:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:44.841 22:29:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:44.841 22:29:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.841 22:29:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.841 22:29:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:44.841 22:29:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:44.841 22:29:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:44.841 22:29:39 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:44.841 22:29:39 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:44.841 22:29:39 -- host/bdevperf.sh@24 -- # nvmftestinit 00:30:44.841 22:29:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:44.842 22:29:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.842 22:29:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:44.842 22:29:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:44.842 22:29:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:44.842 22:29:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:44.842 22:29:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:44.842 22:29:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:44.842 22:29:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:44.842 22:29:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:44.842 22:29:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:44.842 22:29:39 -- common/autotest_common.sh@10 -- # set +x 00:30:50.111 22:29:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:30:50.111 22:29:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:50.111 22:29:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:50.111 22:29:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:50.111 22:29:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:50.111 22:29:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:50.111 22:29:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:50.111 22:29:44 -- nvmf/common.sh@294 -- # net_devs=() 00:30:50.111 22:29:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:50.111 22:29:44 -- nvmf/common.sh@295 -- # e810=() 00:30:50.111 22:29:44 -- nvmf/common.sh@295 -- # local -ga e810 00:30:50.111 22:29:44 -- nvmf/common.sh@296 -- # x722=() 00:30:50.111 22:29:44 -- nvmf/common.sh@296 -- # local -ga x722 00:30:50.111 22:29:44 -- nvmf/common.sh@297 -- # mlx=() 00:30:50.111 22:29:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:50.111 22:29:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.111 22:29:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.111 22:29:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.111 22:29:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.111 22:29:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.111 22:29:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.111 22:29:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.111 22:29:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.111 22:29:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.111 22:29:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.111 22:29:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.111 22:29:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:50.111 22:29:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:50.111 22:29:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:50.111 22:29:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:50.111 22:29:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:50.111 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:50.111 22:29:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:50.111 22:29:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:50.111 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:50.111 22:29:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
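The NIC discovery traced here works purely from sysfs: each candidate vendor/device ID is matched against /sys/bus/pci/devices, and the device's net/ directory gives the kernel interface name (the cvl_0_0 / cvl_0_1 names the rest of the test uses). A minimal standalone sketch of the same lookup follows; it is not the harness code itself, and the 0x8086/0x159b pair is simply the one reported in the Found ... lines of this trace.

#!/usr/bin/env bash
# Minimal sketch of the sysfs lookup traced above: match a PCI vendor/device
# pair and report the net interfaces bound to each matching function.
# 0x8086/0x159b (Intel E810) is the pair reported in the Found ... lines.
for dev in /sys/bus/pci/devices/*; do
  [[ $(cat "$dev/vendor") == 0x8086 && $(cat "$dev/device") == 0x159b ]] || continue
  nets=("$dev"/net/*)                      # netdev(s) bound to this function
  [[ -e ${nets[0]} ]] || continue          # skip if no kernel driver is bound
  echo "Found net devices under ${dev##*/}: ${nets[*]##*/}"
done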
00:30:50.111 22:29:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:50.111 22:29:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.111 22:29:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:50.111 22:29:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.111 22:29:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:50.111 Found net devices under 0000:86:00.0: cvl_0_0 00:30:50.111 22:29:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.111 22:29:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:50.111 22:29:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.111 22:29:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:50.111 22:29:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.111 22:29:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:50.111 Found net devices under 0000:86:00.1: cvl_0_1 00:30:50.111 22:29:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.111 22:29:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:50.111 22:29:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:50.111 22:29:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:50.111 22:29:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:50.111 22:29:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.111 22:29:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.111 22:29:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.111 22:29:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:50.111 22:29:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.111 22:29:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.111 22:29:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:50.111 22:29:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.111 22:29:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.111 22:29:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:50.111 22:29:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:50.111 22:29:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.111 22:29:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.112 22:29:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.112 22:29:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.112 22:29:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:50.112 22:29:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.112 22:29:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.112 22:29:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.112 22:29:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:50.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:50.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:30:50.112 00:30:50.112 --- 10.0.0.2 ping statistics --- 00:30:50.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.112 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:50.112 22:29:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:50.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:30:50.112 00:30:50.112 --- 10.0.0.1 ping statistics --- 00:30:50.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.112 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:30:50.112 22:29:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.112 22:29:45 -- nvmf/common.sh@410 -- # return 0 00:30:50.112 22:29:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:50.112 22:29:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.112 22:29:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:50.112 22:29:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:50.112 22:29:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.112 22:29:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:50.112 22:29:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:50.112 22:29:45 -- host/bdevperf.sh@25 -- # tgt_init 00:30:50.112 22:29:45 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:50.112 22:29:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:50.112 22:29:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:50.112 22:29:45 -- common/autotest_common.sh@10 -- # set +x 00:30:50.112 22:29:45 -- nvmf/common.sh@469 -- # nvmfpid=3737956 00:30:50.112 22:29:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:50.112 22:29:45 -- nvmf/common.sh@470 -- # waitforlisten 3737956 00:30:50.112 22:29:45 -- common/autotest_common.sh@819 -- # '[' -z 3737956 ']' 00:30:50.112 22:29:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.112 22:29:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:50.112 22:29:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.112 22:29:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:50.112 22:29:45 -- common/autotest_common.sh@10 -- # set +x 00:30:50.112 [2024-07-24 22:29:45.206058] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:50.112 [2024-07-24 22:29:45.206100] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.112 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.371 [2024-07-24 22:29:45.263568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:50.371 [2024-07-24 22:29:45.302917] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:50.371 [2024-07-24 22:29:45.303038] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:50.371 [2024-07-24 22:29:45.303052] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.371 [2024-07-24 22:29:45.303059] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:50.371 [2024-07-24 22:29:45.303180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.371 [2024-07-24 22:29:45.303207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.371 [2024-07-24 22:29:45.303208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.938 22:29:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:50.938 22:29:46 -- common/autotest_common.sh@852 -- # return 0 00:30:50.938 22:29:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:50.938 22:29:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:50.938 22:29:46 -- common/autotest_common.sh@10 -- # set +x 00:30:50.938 22:29:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.938 22:29:46 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:50.938 22:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.938 22:29:46 -- common/autotest_common.sh@10 -- # set +x 00:30:50.938 [2024-07-24 22:29:46.058035] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.938 22:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:50.938 22:29:46 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:50.938 22:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.938 22:29:46 -- common/autotest_common.sh@10 -- # set +x 00:30:51.197 Malloc0 00:30:51.197 22:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.197 22:29:46 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:51.197 22:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.197 22:29:46 -- common/autotest_common.sh@10 -- # set +x 00:30:51.197 22:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.197 22:29:46 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:51.197 22:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.197 22:29:46 -- common/autotest_common.sh@10 -- # set +x 00:30:51.197 22:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.197 22:29:46 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.197 22:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:51.197 22:29:46 -- common/autotest_common.sh@10 -- # set +x 00:30:51.197 [2024-07-24 22:29:46.126732] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.197 22:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:51.197 22:29:46 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:51.197 22:29:46 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:51.197 22:29:46 -- nvmf/common.sh@520 -- # config=() 00:30:51.197 22:29:46 -- nvmf/common.sh@520 -- # local subsystem config 00:30:51.197 22:29:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:51.197 22:29:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:51.197 { 
00:30:51.197 "params": { 00:30:51.197 "name": "Nvme$subsystem", 00:30:51.197 "trtype": "$TEST_TRANSPORT", 00:30:51.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.197 "adrfam": "ipv4", 00:30:51.197 "trsvcid": "$NVMF_PORT", 00:30:51.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.197 "hdgst": ${hdgst:-false}, 00:30:51.197 "ddgst": ${ddgst:-false} 00:30:51.197 }, 00:30:51.197 "method": "bdev_nvme_attach_controller" 00:30:51.197 } 00:30:51.197 EOF 00:30:51.197 )") 00:30:51.197 22:29:46 -- nvmf/common.sh@542 -- # cat 00:30:51.197 22:29:46 -- nvmf/common.sh@544 -- # jq . 00:30:51.197 22:29:46 -- nvmf/common.sh@545 -- # IFS=, 00:30:51.197 22:29:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:51.197 "params": { 00:30:51.197 "name": "Nvme1", 00:30:51.197 "trtype": "tcp", 00:30:51.197 "traddr": "10.0.0.2", 00:30:51.197 "adrfam": "ipv4", 00:30:51.197 "trsvcid": "4420", 00:30:51.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:51.197 "hdgst": false, 00:30:51.197 "ddgst": false 00:30:51.197 }, 00:30:51.197 "method": "bdev_nvme_attach_controller" 00:30:51.197 }' 00:30:51.197 [2024-07-24 22:29:46.174444] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:51.197 [2024-07-24 22:29:46.174487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738159 ] 00:30:51.197 EAL: No free 2048 kB hugepages reported on node 1 00:30:51.197 [2024-07-24 22:29:46.229815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.197 [2024-07-24 22:29:46.268202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.455 Running I/O for 1 seconds... 
00:30:52.391 00:30:52.391 Latency(us) 00:30:52.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:52.391 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:52.391 Verification LBA range: start 0x0 length 0x4000 00:30:52.391 Nvme1n1 : 1.00 16626.36 64.95 0.00 0.00 7667.24 1025.78 25986.45 00:30:52.391 =================================================================================================================== 00:30:52.391 Total : 16626.36 64.95 0.00 0.00 7667.24 1025.78 25986.45 00:30:52.650 22:29:47 -- host/bdevperf.sh@30 -- # bdevperfpid=3738401 00:30:52.650 22:29:47 -- host/bdevperf.sh@32 -- # sleep 3 00:30:52.650 22:29:47 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:52.650 22:29:47 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:52.650 22:29:47 -- nvmf/common.sh@520 -- # config=() 00:30:52.650 22:29:47 -- nvmf/common.sh@520 -- # local subsystem config 00:30:52.650 22:29:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:52.650 22:29:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:52.650 { 00:30:52.650 "params": { 00:30:52.650 "name": "Nvme$subsystem", 00:30:52.650 "trtype": "$TEST_TRANSPORT", 00:30:52.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:52.650 "adrfam": "ipv4", 00:30:52.650 "trsvcid": "$NVMF_PORT", 00:30:52.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:52.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:52.650 "hdgst": ${hdgst:-false}, 00:30:52.650 "ddgst": ${ddgst:-false} 00:30:52.650 }, 00:30:52.650 "method": "bdev_nvme_attach_controller" 00:30:52.650 } 00:30:52.650 EOF 00:30:52.650 )") 00:30:52.650 22:29:47 -- nvmf/common.sh@542 -- # cat 00:30:52.650 22:29:47 -- nvmf/common.sh@544 -- # jq . 00:30:52.650 22:29:47 -- nvmf/common.sh@545 -- # IFS=, 00:30:52.650 22:29:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:52.650 "params": { 00:30:52.650 "name": "Nvme1", 00:30:52.650 "trtype": "tcp", 00:30:52.650 "traddr": "10.0.0.2", 00:30:52.650 "adrfam": "ipv4", 00:30:52.650 "trsvcid": "4420", 00:30:52.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:52.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:52.650 "hdgst": false, 00:30:52.650 "ddgst": false 00:30:52.650 }, 00:30:52.650 "method": "bdev_nvme_attach_controller" 00:30:52.650 }' 00:30:52.650 [2024-07-24 22:29:47.681340] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:30:52.650 [2024-07-24 22:29:47.681390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3738401 ] 00:30:52.650 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.650 [2024-07-24 22:29:47.737046] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.650 [2024-07-24 22:29:47.772060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.909 Running I/O for 15 seconds... 
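When the target is killed mid-run (next trace lines), the host side sees its outstanding commands aborted, and per-bdev NVMe error counters can be read back over the bdevperf RPC socket, the same way host/digest.sh@71 counted transient transport errors earlier in this log. A minimal sketch follows; the socket path and bdev name are the ones the digest test used above and are only examples, since this bdevperf instance may have been started with a different RPC socket.

# Minimal sketch: read NVMe error counters from a running bdevperf instance,
# mirroring the get_transient_errcount query earlier in this log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock   # example socket; use the -r/-s path of the running instance
errs=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
echo "transient transport errors so far: $errs"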
00:30:56.207 22:29:50 -- host/bdevperf.sh@33 -- # kill -9 3737956 00:30:56.207 22:29:50 -- host/bdevperf.sh@35 -- # sleep 3 00:30:56.207 [2024-07-24 22:29:50.653977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654198] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.207 [2024-07-24 22:29:50.654494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:56.207 [2024-07-24 22:29:50.654501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.207 [2024-07-24 22:29:50.654539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.207 [2024-07-24 22:29:50.654547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.208 [2024-07-24 22:29:50.654554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.208 [2024-07-24 22:29:50.654569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.208 [2024-07-24 22:29:50.654584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.208 [2024-07-24 22:29:50.654599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.208 [2024-07-24 22:29:50.654628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654651] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.208 [2024-07-24 22:29:50.654657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.208 [2024-07-24 22:29:50.654847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.208 [2024-07-24 22:29:50.654861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.208 [2024-07-24 22:29:50.654890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.208 [2024-07-24 22:29:50.654935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.208 [2024-07-24 22:29:50.654950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.654988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.654994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.655002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.655009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.655017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.655024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.655032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.655038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.655051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.655058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.655066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.208 [2024-07-24 22:29:50.655072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.208 [2024-07-24 22:29:50.655080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80784 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:56.209 [2024-07-24 22:29:50.655250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655396] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.209 [2024-07-24 22:29:50.655584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.209 [2024-07-24 22:29:50.655598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.209 [2024-07-24 22:29:50.655606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.210 [2024-07-24 22:29:50.655744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.210 [2024-07-24 22:29:50.655817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.210 [2024-07-24 22:29:50.655831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:81008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.210 [2024-07-24 22:29:50.655933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.655941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1501ce0 is same with the state(5) to be set 00:30:56.210 [2024-07-24 22:29:50.655949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.210 [2024-07-24 22:29:50.655954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.210 [2024-07-24 22:29:50.655960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81048 len:8 PRP1 0x0 PRP2 0x0 00:30:56.210 [2024-07-24 22:29:50.655970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.210 [2024-07-24 22:29:50.656013] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1501ce0 was disconnected and freed. reset controller. 
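The block above is the driver draining the submission queue when the TCP qpair dies: every queued READ/WRITE is completed manually with ABORTED - SQ DELETION (00/08) and printed by nvme_io_qpair_print_command before the qpair is freed and the controller reset begins. A minimal sketch (not part of the test suite) of how those entries could be summarized offline, assuming the log has been saved to a plain text file named autotest.log (hypothetical filename):

    #!/usr/bin/env python3
    """Summarize the aborted I/O entries printed by nvme_io_qpair_print_command."""
    import re
    from collections import defaultdict

    # Pattern grounded in the entries above: opcode, sqid, cid, nsid, lba, len.
    CMD = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )

    def summarize(path="autotest.log"):
        stats = defaultdict(lambda: {"count": 0, "lbas": []})
        with open(path) as fh:
            for line in fh:
                # A single captured line may hold many entries, so use findall.
                for op, _sqid, _cid, _nsid, lba, _length in CMD.findall(line):
                    stats[op]["count"] += 1
                    stats[op]["lbas"].append(int(lba))
        for op, s in sorted(stats.items()):
            print(f"{op}: {s['count']} commands aborted, "
                  f"lba {min(s['lbas'])}..{max(s['lbas'])}")

    if __name__ == "__main__":
        summarize()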
00:30:56.210 [2024-07-24 22:29:50.658260] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.210 [2024-07-24 22:29:50.658315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.210 [2024-07-24 22:29:50.659057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.210 [2024-07-24 22:29:50.659586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.210 [2024-07-24 22:29:50.659596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.210 [2024-07-24 22:29:50.659604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.210 [2024-07-24 22:29:50.659707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.210 [2024-07-24 22:29:50.659854] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.210 [2024-07-24 22:29:50.659862] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.210 [2024-07-24 22:29:50.659873] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.210 [2024-07-24 22:29:50.661710] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.210 [2024-07-24 22:29:50.670252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.210 [2024-07-24 22:29:50.670855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.210 [2024-07-24 22:29:50.671377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.210 [2024-07-24 22:29:50.671413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.210 [2024-07-24 22:29:50.671435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.210 [2024-07-24 22:29:50.671859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.210 [2024-07-24 22:29:50.671915] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.210 [2024-07-24 22:29:50.671923] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.210 [2024-07-24 22:29:50.671930] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.210 [2024-07-24 22:29:50.673778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
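Every reset cycle from here on fails the same way: posix_sock_create reports connect() errno = 111, which on Linux is ECONNREFUSED, i.e. nothing is accepting connections at 10.0.0.2:4420 while the target side is being torn down, so controller reinitialization cannot complete. A quick way to confirm the errno name, assuming a Python interpreter on the same Linux host:

    import errno, os
    # Prints: ECONNREFUSED Connection refused
    print(errno.errorcode[111], os.strerror(111))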
00:30:56.210 [2024-07-24 22:29:50.682216] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.210 [2024-07-24 22:29:50.682849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.210 [2024-07-24 22:29:50.683434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.210 [2024-07-24 22:29:50.683470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.210 [2024-07-24 22:29:50.683493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.210 [2024-07-24 22:29:50.683975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.210 [2024-07-24 22:29:50.684123] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.210 [2024-07-24 22:29:50.684131] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.210 [2024-07-24 22:29:50.684138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.210 [2024-07-24 22:29:50.685955] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.210 [2024-07-24 22:29:50.694131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.210 [2024-07-24 22:29:50.694797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.210 [2024-07-24 22:29:50.695296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.211 [2024-07-24 22:29:50.695312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.211 [2024-07-24 22:29:50.695322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.211 [2024-07-24 22:29:50.695489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.211 [2024-07-24 22:29:50.695677] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.211 [2024-07-24 22:29:50.695688] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.211 [2024-07-24 22:29:50.695697] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.211 [2024-07-24 22:29:50.698458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.211 [2024-07-24 22:29:50.706185] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.211 [2024-07-24 22:29:50.706758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.211 [2024-07-24 22:29:50.707226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.211 [2024-07-24 22:29:50.707261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.211 [2024-07-24 22:29:50.707283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.211 [2024-07-24 22:29:50.707713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.211 [2024-07-24 22:29:50.707855] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.211 [2024-07-24 22:29:50.707863] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.211 [2024-07-24 22:29:50.707869] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.211 [2024-07-24 22:29:50.709593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.211 [2024-07-24 22:29:50.718114] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.211 [2024-07-24 22:29:50.718741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.211 [2024-07-24 22:29:50.719266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.211 [2024-07-24 22:29:50.719301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.211 [2024-07-24 22:29:50.719322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.211 [2024-07-24 22:29:50.719653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.211 [2024-07-24 22:29:50.719928] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.211 [2024-07-24 22:29:50.719936] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.211 [2024-07-24 22:29:50.719942] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.211 [2024-07-24 22:29:50.721733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.211 [2024-07-24 22:29:50.729997] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.211 [2024-07-24 22:29:50.730611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.211 [2024-07-24 22:29:50.731123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.211 [2024-07-24 22:29:50.731158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.211 [2024-07-24 22:29:50.731179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.211 [2024-07-24 22:29:50.731480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.211 [2024-07-24 22:29:50.731587] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.211 [2024-07-24 22:29:50.731595] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.211 [2024-07-24 22:29:50.731600] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.211 [2024-07-24 22:29:50.733403] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.211 [2024-07-24 22:29:50.741725] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.211 [2024-07-24 22:29:50.742325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.211 [2024-07-24 22:29:50.742827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.211 [2024-07-24 22:29:50.742857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.211 [2024-07-24 22:29:50.742890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.211 [2024-07-24 22:29:50.743004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.211 [2024-07-24 22:29:50.743152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.211 [2024-07-24 22:29:50.743161] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.211 [2024-07-24 22:29:50.743167] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.211 [2024-07-24 22:29:50.744901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.211 [2024-07-24 22:29:50.753590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.211 [2024-07-24 22:29:50.754231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.211 [2024-07-24 22:29:50.754744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.211 [2024-07-24 22:29:50.754775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.211 [2024-07-24 22:29:50.754796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.211 [2024-07-24 22:29:50.755147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.211 [2024-07-24 22:29:50.755284] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.211 [2024-07-24 22:29:50.755292] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.211 [2024-07-24 22:29:50.755298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.211 [2024-07-24 22:29:50.757124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.211 [2024-07-24 22:29:50.765413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.211 [2024-07-24 22:29:50.765998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.211 [2024-07-24 22:29:50.766528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.211 [2024-07-24 22:29:50.766562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.211 [2024-07-24 22:29:50.766584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.211 [2024-07-24 22:29:50.767076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.211 [2024-07-24 22:29:50.767217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.211 [2024-07-24 22:29:50.767225] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.211 [2024-07-24 22:29:50.767231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.211 [2024-07-24 22:29:50.768941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.211 [2024-07-24 22:29:50.777328] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.211 [2024-07-24 22:29:50.777988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.778546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.778581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.212 [2024-07-24 22:29:50.778602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.212 [2024-07-24 22:29:50.778833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.212 [2024-07-24 22:29:50.778985] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.212 [2024-07-24 22:29:50.778992] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.212 [2024-07-24 22:29:50.778998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.212 [2024-07-24 22:29:50.780674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.212 [2024-07-24 22:29:50.789101] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.212 [2024-07-24 22:29:50.789738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.790136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.790168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.212 [2024-07-24 22:29:50.790190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.212 [2024-07-24 22:29:50.790391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.212 [2024-07-24 22:29:50.790505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.212 [2024-07-24 22:29:50.790513] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.212 [2024-07-24 22:29:50.790519] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.212 [2024-07-24 22:29:50.792299] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.212 [2024-07-24 22:29:50.801053] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.212 [2024-07-24 22:29:50.801685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.802161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.802172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.212 [2024-07-24 22:29:50.802179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.212 [2024-07-24 22:29:50.802313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.212 [2024-07-24 22:29:50.802463] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.212 [2024-07-24 22:29:50.802470] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.212 [2024-07-24 22:29:50.802476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.212 [2024-07-24 22:29:50.804211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.212 [2024-07-24 22:29:50.812934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.212 [2024-07-24 22:29:50.813587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.814352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.814394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.212 [2024-07-24 22:29:50.814417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.212 [2024-07-24 22:29:50.814747] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.212 [2024-07-24 22:29:50.815027] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.212 [2024-07-24 22:29:50.815069] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.212 [2024-07-24 22:29:50.815100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.212 [2024-07-24 22:29:50.816990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.212 [2024-07-24 22:29:50.824853] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.212 [2024-07-24 22:29:50.825452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.825809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.825839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.212 [2024-07-24 22:29:50.825861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.212 [2024-07-24 22:29:50.826155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.212 [2024-07-24 22:29:50.826418] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.212 [2024-07-24 22:29:50.826429] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.212 [2024-07-24 22:29:50.826438] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.212 [2024-07-24 22:29:50.828979] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.212 [2024-07-24 22:29:50.837220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.212 [2024-07-24 22:29:50.837841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.838357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.838392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.212 [2024-07-24 22:29:50.838414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.212 [2024-07-24 22:29:50.838697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.212 [2024-07-24 22:29:50.839137] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.212 [2024-07-24 22:29:50.839164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.212 [2024-07-24 22:29:50.839184] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.212 [2024-07-24 22:29:50.841161] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.212 [2024-07-24 22:29:50.849304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.212 [2024-07-24 22:29:50.849785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.850304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.850314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.212 [2024-07-24 22:29:50.850325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.212 [2024-07-24 22:29:50.850472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.212 [2024-07-24 22:29:50.850590] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.212 [2024-07-24 22:29:50.850598] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.212 [2024-07-24 22:29:50.850604] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.212 [2024-07-24 22:29:50.852397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.212 [2024-07-24 22:29:50.861346] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.212 [2024-07-24 22:29:50.862002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.862582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.212 [2024-07-24 22:29:50.862614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.212 [2024-07-24 22:29:50.862636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.212 [2024-07-24 22:29:50.863078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.212 [2024-07-24 22:29:50.863315] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.212 [2024-07-24 22:29:50.863323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.212 [2024-07-24 22:29:50.863329] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.212 [2024-07-24 22:29:50.865066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.212 [2024-07-24 22:29:50.873177] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.212 [2024-07-24 22:29:50.873718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.874189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.874222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.213 [2024-07-24 22:29:50.874244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.213 [2024-07-24 22:29:50.874575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.213 [2024-07-24 22:29:50.875069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.213 [2024-07-24 22:29:50.875093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.213 [2024-07-24 22:29:50.875114] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.213 [2024-07-24 22:29:50.876999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.213 [2024-07-24 22:29:50.885034] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.213 [2024-07-24 22:29:50.885655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.886184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.886220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.213 [2024-07-24 22:29:50.886242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.213 [2024-07-24 22:29:50.886681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.213 [2024-07-24 22:29:50.887122] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.213 [2024-07-24 22:29:50.887147] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.213 [2024-07-24 22:29:50.887167] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.213 [2024-07-24 22:29:50.889912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.213 [2024-07-24 22:29:50.897647] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.213 [2024-07-24 22:29:50.898234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.898741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.898771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.213 [2024-07-24 22:29:50.898794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.213 [2024-07-24 22:29:50.899083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.213 [2024-07-24 22:29:50.899351] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.213 [2024-07-24 22:29:50.899360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.213 [2024-07-24 22:29:50.899366] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.213 [2024-07-24 22:29:50.901072] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.213 [2024-07-24 22:29:50.909567] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.213 [2024-07-24 22:29:50.910205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.910641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.910672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.213 [2024-07-24 22:29:50.910695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.213 [2024-07-24 22:29:50.911026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.213 [2024-07-24 22:29:50.911280] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.213 [2024-07-24 22:29:50.911289] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.213 [2024-07-24 22:29:50.911295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.213 [2024-07-24 22:29:50.913203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.213 [2024-07-24 22:29:50.921448] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.213 [2024-07-24 22:29:50.922020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.922438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.922468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.213 [2024-07-24 22:29:50.922475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.213 [2024-07-24 22:29:50.922591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.213 [2024-07-24 22:29:50.922726] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.213 [2024-07-24 22:29:50.922734] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.213 [2024-07-24 22:29:50.922741] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.213 [2024-07-24 22:29:50.924456] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.213 [2024-07-24 22:29:50.933517] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.213 [2024-07-24 22:29:50.934086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.935172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.935194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.213 [2024-07-24 22:29:50.935203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.213 [2024-07-24 22:29:50.935313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.213 [2024-07-24 22:29:50.935446] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.213 [2024-07-24 22:29:50.935454] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.213 [2024-07-24 22:29:50.935460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.213 [2024-07-24 22:29:50.937275] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.213 [2024-07-24 22:29:50.945240] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.213 [2024-07-24 22:29:50.945784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.946275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.946308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.213 [2024-07-24 22:29:50.946342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.213 [2024-07-24 22:29:50.946456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.213 [2024-07-24 22:29:50.946569] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.213 [2024-07-24 22:29:50.946577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.213 [2024-07-24 22:29:50.946584] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.213 [2024-07-24 22:29:50.948252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.213 [2024-07-24 22:29:50.957124] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.213 [2024-07-24 22:29:50.957815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.958328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.958364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.213 [2024-07-24 22:29:50.958387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.213 [2024-07-24 22:29:50.958696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.213 [2024-07-24 22:29:50.958824] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.213 [2024-07-24 22:29:50.958835] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.213 [2024-07-24 22:29:50.958841] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.213 [2024-07-24 22:29:50.960593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.213 [2024-07-24 22:29:50.969180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.213 [2024-07-24 22:29:50.969706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.970193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.970225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.213 [2024-07-24 22:29:50.970247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.213 [2024-07-24 22:29:50.970499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.213 [2024-07-24 22:29:50.970628] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.213 [2024-07-24 22:29:50.970635] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.213 [2024-07-24 22:29:50.970641] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.213 [2024-07-24 22:29:50.972339] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.213 [2024-07-24 22:29:50.980936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.213 [2024-07-24 22:29:50.981568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.982145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.213 [2024-07-24 22:29:50.982177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.214 [2024-07-24 22:29:50.982199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.214 [2024-07-24 22:29:50.982427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.214 [2024-07-24 22:29:50.982747] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.214 [2024-07-24 22:29:50.982755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.214 [2024-07-24 22:29:50.982761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.214 [2024-07-24 22:29:50.984485] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.214 [2024-07-24 22:29:50.992800] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.214 [2024-07-24 22:29:50.993434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:50.993835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:50.993866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.214 [2024-07-24 22:29:50.993887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.214 [2024-07-24 22:29:50.994228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.214 [2024-07-24 22:29:50.994610] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.214 [2024-07-24 22:29:50.994633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.214 [2024-07-24 22:29:50.994661] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.214 [2024-07-24 22:29:50.996585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.214 [2024-07-24 22:29:51.004473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.214 [2024-07-24 22:29:51.005087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.005545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.005576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.214 [2024-07-24 22:29:51.005597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.214 [2024-07-24 22:29:51.005977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.214 [2024-07-24 22:29:51.006197] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.214 [2024-07-24 22:29:51.006205] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.214 [2024-07-24 22:29:51.006211] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.214 [2024-07-24 22:29:51.007897] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.214 [2024-07-24 22:29:51.016290] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.214 [2024-07-24 22:29:51.016787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.017316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.017351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.214 [2024-07-24 22:29:51.017372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.214 [2024-07-24 22:29:51.017703] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.214 [2024-07-24 22:29:51.017808] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.214 [2024-07-24 22:29:51.017815] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.214 [2024-07-24 22:29:51.017821] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.214 [2024-07-24 22:29:51.019663] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.214 [2024-07-24 22:29:51.028139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.214 [2024-07-24 22:29:51.028746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.029277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.029309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.214 [2024-07-24 22:29:51.029330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.214 [2024-07-24 22:29:51.029808] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.214 [2024-07-24 22:29:51.029974] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.214 [2024-07-24 22:29:51.029982] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.214 [2024-07-24 22:29:51.029988] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.214 [2024-07-24 22:29:51.031782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.214 [2024-07-24 22:29:51.039936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.214 [2024-07-24 22:29:51.040496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.041039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.041083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.214 [2024-07-24 22:29:51.041104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.214 [2024-07-24 22:29:51.041572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.214 [2024-07-24 22:29:51.041641] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.214 [2024-07-24 22:29:51.041649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.214 [2024-07-24 22:29:51.041655] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.214 [2024-07-24 22:29:51.043325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.214 [2024-07-24 22:29:51.051894] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.214 [2024-07-24 22:29:51.052507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.053033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.053073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.214 [2024-07-24 22:29:51.053094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.214 [2024-07-24 22:29:51.053514] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.214 [2024-07-24 22:29:51.053627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.214 [2024-07-24 22:29:51.053634] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.214 [2024-07-24 22:29:51.053640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.214 [2024-07-24 22:29:51.055394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.214 [2024-07-24 22:29:51.063856] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.214 [2024-07-24 22:29:51.064506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.065059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.065090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.214 [2024-07-24 22:29:51.065112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.214 [2024-07-24 22:29:51.065599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.214 [2024-07-24 22:29:51.065728] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.214 [2024-07-24 22:29:51.065736] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.214 [2024-07-24 22:29:51.065742] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.214 [2024-07-24 22:29:51.067642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.214 [2024-07-24 22:29:51.075825] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.214 [2024-07-24 22:29:51.076525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.077120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.077151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.214 [2024-07-24 22:29:51.077173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.214 [2024-07-24 22:29:51.077502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.214 [2024-07-24 22:29:51.077832] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.214 [2024-07-24 22:29:51.077856] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.214 [2024-07-24 22:29:51.077876] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.214 [2024-07-24 22:29:51.079775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.214 [2024-07-24 22:29:51.087691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.214 [2024-07-24 22:29:51.088238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.089383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.214 [2024-07-24 22:29:51.089404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.214 [2024-07-24 22:29:51.089412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.214 [2024-07-24 22:29:51.089503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.215 [2024-07-24 22:29:51.089632] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.215 [2024-07-24 22:29:51.089640] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.215 [2024-07-24 22:29:51.089646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.215 [2024-07-24 22:29:51.091189] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.215 [2024-07-24 22:29:51.099750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.215 [2024-07-24 22:29:51.100743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.101200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.101213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.215 [2024-07-24 22:29:51.101221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.215 [2024-07-24 22:29:51.101339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.215 [2024-07-24 22:29:51.101467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.215 [2024-07-24 22:29:51.101475] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.215 [2024-07-24 22:29:51.101481] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.215 [2024-07-24 22:29:51.103283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.215 [2024-07-24 22:29:51.111656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.215 [2024-07-24 22:29:51.112269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.112758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.112789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.215 [2024-07-24 22:29:51.112811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.215 [2024-07-24 22:29:51.112974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.215 [2024-07-24 22:29:51.113080] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.215 [2024-07-24 22:29:51.113088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.215 [2024-07-24 22:29:51.113095] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.215 [2024-07-24 22:29:51.114894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.215 [2024-07-24 22:29:51.123468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.215 [2024-07-24 22:29:51.124138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.124552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.124583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.215 [2024-07-24 22:29:51.124604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.215 [2024-07-24 22:29:51.124983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.215 [2024-07-24 22:29:51.125167] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.215 [2024-07-24 22:29:51.125175] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.215 [2024-07-24 22:29:51.125181] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.215 [2024-07-24 22:29:51.127009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.215 [2024-07-24 22:29:51.135501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.215 [2024-07-24 22:29:51.136064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.136472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.136482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.215 [2024-07-24 22:29:51.136489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.215 [2024-07-24 22:29:51.136606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.215 [2024-07-24 22:29:51.136723] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.215 [2024-07-24 22:29:51.136730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.215 [2024-07-24 22:29:51.136737] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.215 [2024-07-24 22:29:51.138587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.215 [2024-07-24 22:29:51.147242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.215 [2024-07-24 22:29:51.147802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.148308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.148350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.215 [2024-07-24 22:29:51.148373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.215 [2024-07-24 22:29:51.148756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.215 [2024-07-24 22:29:51.149148] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.215 [2024-07-24 22:29:51.149180] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.215 [2024-07-24 22:29:51.149187] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.215 [2024-07-24 22:29:51.151003] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.215 [2024-07-24 22:29:51.159118] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.215 [2024-07-24 22:29:51.159709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.160187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.160220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.215 [2024-07-24 22:29:51.160241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.215 [2024-07-24 22:29:51.160650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.215 [2024-07-24 22:29:51.160838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.215 [2024-07-24 22:29:51.160849] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.215 [2024-07-24 22:29:51.160858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.215 [2024-07-24 22:29:51.163377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.215 [2024-07-24 22:29:51.171769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.215 [2024-07-24 22:29:51.172067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.172468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.172478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.215 [2024-07-24 22:29:51.172485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.215 [2024-07-24 22:29:51.172571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.215 [2024-07-24 22:29:51.172688] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.215 [2024-07-24 22:29:51.172696] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.215 [2024-07-24 22:29:51.172702] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.215 [2024-07-24 22:29:51.174506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.215 [2024-07-24 22:29:51.183857] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.215 [2024-07-24 22:29:51.184443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.184930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.184941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.215 [2024-07-24 22:29:51.184951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.215 [2024-07-24 22:29:51.185077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.215 [2024-07-24 22:29:51.185179] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.215 [2024-07-24 22:29:51.185187] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.215 [2024-07-24 22:29:51.185193] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.215 [2024-07-24 22:29:51.187059] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.215 [2024-07-24 22:29:51.195884] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.215 [2024-07-24 22:29:51.196520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.196996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.215 [2024-07-24 22:29:51.197006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.215 [2024-07-24 22:29:51.197014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.215 [2024-07-24 22:29:51.197169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.216 [2024-07-24 22:29:51.197258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.216 [2024-07-24 22:29:51.197266] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.216 [2024-07-24 22:29:51.197273] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.216 [2024-07-24 22:29:51.199144] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.216 [2024-07-24 22:29:51.207968] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.216 [2024-07-24 22:29:51.208502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.208911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.208923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.216 [2024-07-24 22:29:51.208930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.216 [2024-07-24 22:29:51.209097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.216 [2024-07-24 22:29:51.209209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.216 [2024-07-24 22:29:51.209217] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.216 [2024-07-24 22:29:51.209224] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.216 [2024-07-24 22:29:51.211184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.216 [2024-07-24 22:29:51.220136] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.216 [2024-07-24 22:29:51.220715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.221235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.221267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.216 [2024-07-24 22:29:51.221289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.216 [2024-07-24 22:29:51.221778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.216 [2024-07-24 22:29:51.222021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.216 [2024-07-24 22:29:51.222032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.216 [2024-07-24 22:29:51.222040] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.216 [2024-07-24 22:29:51.224709] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.216 [2024-07-24 22:29:51.232411] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.216 [2024-07-24 22:29:51.233062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.233459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.233489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.216 [2024-07-24 22:29:51.233510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.216 [2024-07-24 22:29:51.233725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.216 [2024-07-24 22:29:51.233871] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.216 [2024-07-24 22:29:51.233878] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.216 [2024-07-24 22:29:51.233885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.216 [2024-07-24 22:29:51.235685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.216 [2024-07-24 22:29:51.244419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.216 [2024-07-24 22:29:51.245047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.245519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.245550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.216 [2024-07-24 22:29:51.245572] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.216 [2024-07-24 22:29:51.246008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.216 [2024-07-24 22:29:51.246129] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.216 [2024-07-24 22:29:51.246137] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.216 [2024-07-24 22:29:51.246143] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.216 [2024-07-24 22:29:51.248002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.216 [2024-07-24 22:29:51.256197] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.216 [2024-07-24 22:29:51.256864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.257374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.257406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.216 [2024-07-24 22:29:51.257427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.216 [2024-07-24 22:29:51.257857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.216 [2024-07-24 22:29:51.258110] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.216 [2024-07-24 22:29:51.258119] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.216 [2024-07-24 22:29:51.258125] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.216 [2024-07-24 22:29:51.259892] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.216 [2024-07-24 22:29:51.267966] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.216 [2024-07-24 22:29:51.268592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.269100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.269132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.216 [2024-07-24 22:29:51.269154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.216 [2024-07-24 22:29:51.269482] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.216 [2024-07-24 22:29:51.269609] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.216 [2024-07-24 22:29:51.269617] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.216 [2024-07-24 22:29:51.269622] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.216 [2024-07-24 22:29:51.271250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.216 [2024-07-24 22:29:51.279798] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.216 [2024-07-24 22:29:51.280141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.280544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.216 [2024-07-24 22:29:51.280575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.216 [2024-07-24 22:29:51.280596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.216 [2024-07-24 22:29:51.280926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.216 [2024-07-24 22:29:51.281244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.216 [2024-07-24 22:29:51.281252] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.216 [2024-07-24 22:29:51.281259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.216 [2024-07-24 22:29:51.282845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.216 [2024-07-24 22:29:51.291801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.216 [2024-07-24 22:29:51.292422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.217 [2024-07-24 22:29:51.292958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.217 [2024-07-24 22:29:51.292988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.217 [2024-07-24 22:29:51.293009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.217 [2024-07-24 22:29:51.293280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.217 [2024-07-24 22:29:51.293380] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.217 [2024-07-24 22:29:51.293390] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.217 [2024-07-24 22:29:51.293397] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.217 [2024-07-24 22:29:51.295274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.217 [2024-07-24 22:29:51.303796] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.217 [2024-07-24 22:29:51.304450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.217 [2024-07-24 22:29:51.304976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.217 [2024-07-24 22:29:51.305006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.217 [2024-07-24 22:29:51.305027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.217 [2024-07-24 22:29:51.305373] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.217 [2024-07-24 22:29:51.305534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.217 [2024-07-24 22:29:51.305541] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.217 [2024-07-24 22:29:51.305548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.217 [2024-07-24 22:29:51.307444] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.217 [2024-07-24 22:29:51.315487] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.217 [2024-07-24 22:29:51.316028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.217 [2024-07-24 22:29:51.316568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.217 [2024-07-24 22:29:51.316599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.217 [2024-07-24 22:29:51.316621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.217 [2024-07-24 22:29:51.316949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.217 [2024-07-24 22:29:51.317258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.217 [2024-07-24 22:29:51.317267] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.217 [2024-07-24 22:29:51.317273] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.217 [2024-07-24 22:29:51.318971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.217 [2024-07-24 22:29:51.327132] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.217 [2024-07-24 22:29:51.327797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.217 [2024-07-24 22:29:51.328301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.217 [2024-07-24 22:29:51.328351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.217 [2024-07-24 22:29:51.328358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.217 [2024-07-24 22:29:51.328472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.217 [2024-07-24 22:29:51.328586] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.217 [2024-07-24 22:29:51.328593] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.217 [2024-07-24 22:29:51.328603] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.217 [2024-07-24 22:29:51.330415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.477 [2024-07-24 22:29:51.339062] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.477 [2024-07-24 22:29:51.339697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.477 [2024-07-24 22:29:51.340221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.477 [2024-07-24 22:29:51.340253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.477 [2024-07-24 22:29:51.340274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.477 [2024-07-24 22:29:51.340605] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.477 [2024-07-24 22:29:51.341037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.477 [2024-07-24 22:29:51.341047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.477 [2024-07-24 22:29:51.341053] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.477 [2024-07-24 22:29:51.342669] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.477 [2024-07-24 22:29:51.350983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.477 [2024-07-24 22:29:51.351928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.477 [2024-07-24 22:29:51.352374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.477 [2024-07-24 22:29:51.352388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.477 [2024-07-24 22:29:51.352396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.477 [2024-07-24 22:29:51.352518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.477 [2024-07-24 22:29:51.352619] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.477 [2024-07-24 22:29:51.352629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.477 [2024-07-24 22:29:51.352636] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.477 [2024-07-24 22:29:51.354567] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.477 [2024-07-24 22:29:51.362928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.477 [2024-07-24 22:29:51.363551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.477 [2024-07-24 22:29:51.364028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.477 [2024-07-24 22:29:51.364073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.477 [2024-07-24 22:29:51.364096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.477 [2024-07-24 22:29:51.364427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.477 [2024-07-24 22:29:51.364668] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.477 [2024-07-24 22:29:51.364676] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.477 [2024-07-24 22:29:51.364683] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.478 [2024-07-24 22:29:51.366500] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.478 [2024-07-24 22:29:51.374988] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.478 [2024-07-24 22:29:51.375582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.376032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.376076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.478 [2024-07-24 22:29:51.376099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.478 [2024-07-24 22:29:51.376479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.478 [2024-07-24 22:29:51.376762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.478 [2024-07-24 22:29:51.376770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.478 [2024-07-24 22:29:51.376776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.478 [2024-07-24 22:29:51.378518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.478 [2024-07-24 22:29:51.387061] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.478 [2024-07-24 22:29:51.387619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.387958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.387988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.478 [2024-07-24 22:29:51.388010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.478 [2024-07-24 22:29:51.388254] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.478 [2024-07-24 22:29:51.388686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.478 [2024-07-24 22:29:51.388710] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.478 [2024-07-24 22:29:51.388730] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.478 [2024-07-24 22:29:51.390624] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.478 [2024-07-24 22:29:51.398979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.478 [2024-07-24 22:29:51.399631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.400166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.400198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.478 [2024-07-24 22:29:51.400220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.478 [2024-07-24 22:29:51.400442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.478 [2024-07-24 22:29:51.400541] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.478 [2024-07-24 22:29:51.400549] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.478 [2024-07-24 22:29:51.400555] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.478 [2024-07-24 22:29:51.402231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.478 [2024-07-24 22:29:51.410871] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.478 [2024-07-24 22:29:51.411411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.411916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.411948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.478 [2024-07-24 22:29:51.411969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.478 [2024-07-24 22:29:51.412279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.478 [2024-07-24 22:29:51.412380] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.478 [2024-07-24 22:29:51.412388] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.478 [2024-07-24 22:29:51.412394] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.478 [2024-07-24 22:29:51.414211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.478 [2024-07-24 22:29:51.422820] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.478 [2024-07-24 22:29:51.423308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.423812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.423843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.478 [2024-07-24 22:29:51.423865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.478 [2024-07-24 22:29:51.424412] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.478 [2024-07-24 22:29:51.424659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.478 [2024-07-24 22:29:51.424667] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.478 [2024-07-24 22:29:51.424674] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.478 [2024-07-24 22:29:51.426392] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.478 [2024-07-24 22:29:51.434698] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.478 [2024-07-24 22:29:51.435327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.435852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.435884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.478 [2024-07-24 22:29:51.435905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.478 [2024-07-24 22:29:51.436193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.478 [2024-07-24 22:29:51.436724] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.478 [2024-07-24 22:29:51.436747] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.478 [2024-07-24 22:29:51.436774] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.478 [2024-07-24 22:29:51.438420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.478 [2024-07-24 22:29:51.446559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.478 [2024-07-24 22:29:51.447142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.447679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.447710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.478 [2024-07-24 22:29:51.447732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.478 [2024-07-24 22:29:51.448224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.478 [2024-07-24 22:29:51.448557] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.478 [2024-07-24 22:29:51.448581] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.478 [2024-07-24 22:29:51.448602] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.478 [2024-07-24 22:29:51.450488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.478 [2024-07-24 22:29:51.458362] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.478 [2024-07-24 22:29:51.459027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.459560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.459591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.478 [2024-07-24 22:29:51.459612] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.478 [2024-07-24 22:29:51.459990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.478 [2024-07-24 22:29:51.460279] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.478 [2024-07-24 22:29:51.460288] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.478 [2024-07-24 22:29:51.460294] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.478 [2024-07-24 22:29:51.462058] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.478 [2024-07-24 22:29:51.470158] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.478 [2024-07-24 22:29:51.470793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.471320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.478 [2024-07-24 22:29:51.471352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.478 [2024-07-24 22:29:51.471373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.478 [2024-07-24 22:29:51.471752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.478 [2024-07-24 22:29:51.472012] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.478 [2024-07-24 22:29:51.472020] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.478 [2024-07-24 22:29:51.472026] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.478 [2024-07-24 22:29:51.473790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.478 [2024-07-24 22:29:51.481942] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.478 [2024-07-24 22:29:51.482567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.483090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.483128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.479 [2024-07-24 22:29:51.483150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.479 [2024-07-24 22:29:51.483430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.479 [2024-07-24 22:29:51.483725] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.479 [2024-07-24 22:29:51.483732] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.479 [2024-07-24 22:29:51.483738] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.479 [2024-07-24 22:29:51.486182] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.479 [2024-07-24 22:29:51.494683] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.479 [2024-07-24 22:29:51.495269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.495805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.495836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.479 [2024-07-24 22:29:51.495858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.479 [2024-07-24 22:29:51.496205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.479 [2024-07-24 22:29:51.496638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.479 [2024-07-24 22:29:51.496663] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.479 [2024-07-24 22:29:51.496683] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.479 [2024-07-24 22:29:51.498674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.479 [2024-07-24 22:29:51.506594] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.479 [2024-07-24 22:29:51.507241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.507722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.507753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.479 [2024-07-24 22:29:51.507774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.479 [2024-07-24 22:29:51.508180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.479 [2024-07-24 22:29:51.508309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.479 [2024-07-24 22:29:51.508317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.479 [2024-07-24 22:29:51.508323] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.479 [2024-07-24 22:29:51.510148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.479 [2024-07-24 22:29:51.518401] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.479 [2024-07-24 22:29:51.518980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.519446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.519478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.479 [2024-07-24 22:29:51.519508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.479 [2024-07-24 22:29:51.519787] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.479 [2024-07-24 22:29:51.520012] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.479 [2024-07-24 22:29:51.520020] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.479 [2024-07-24 22:29:51.520026] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.479 [2024-07-24 22:29:51.521716] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.479 [2024-07-24 22:29:51.530211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.479 [2024-07-24 22:29:51.530873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.531383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.531416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.479 [2024-07-24 22:29:51.531437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.479 [2024-07-24 22:29:51.531865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.479 [2024-07-24 22:29:51.532032] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.479 [2024-07-24 22:29:51.532039] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.479 [2024-07-24 22:29:51.532051] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.479 [2024-07-24 22:29:51.533834] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.479 [2024-07-24 22:29:51.541822] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.479 [2024-07-24 22:29:51.542425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.542929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.542960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.479 [2024-07-24 22:29:51.542982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.479 [2024-07-24 22:29:51.543325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.479 [2024-07-24 22:29:51.543807] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.479 [2024-07-24 22:29:51.543831] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.479 [2024-07-24 22:29:51.543851] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.479 [2024-07-24 22:29:51.545642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.479 [2024-07-24 22:29:51.553580] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.479 [2024-07-24 22:29:51.554223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.554707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.554738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.479 [2024-07-24 22:29:51.554759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.479 [2024-07-24 22:29:51.555160] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.479 [2024-07-24 22:29:51.555244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.479 [2024-07-24 22:29:51.555252] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.479 [2024-07-24 22:29:51.555258] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.479 [2024-07-24 22:29:51.557004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.479 [2024-07-24 22:29:51.565404] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.479 [2024-07-24 22:29:51.566013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.479 [2024-07-24 22:29:51.566472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.480 [2024-07-24 22:29:51.566505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.480 [2024-07-24 22:29:51.566526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.480 [2024-07-24 22:29:51.566757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.480 [2024-07-24 22:29:51.567146] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.480 [2024-07-24 22:29:51.567154] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.480 [2024-07-24 22:29:51.567160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.480 [2024-07-24 22:29:51.568869] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.480 [2024-07-24 22:29:51.577301] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.480 [2024-07-24 22:29:51.577913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.480 [2024-07-24 22:29:51.578441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.480 [2024-07-24 22:29:51.578472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.480 [2024-07-24 22:29:51.578494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.480 [2024-07-24 22:29:51.578926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.480 [2024-07-24 22:29:51.579072] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.480 [2024-07-24 22:29:51.579081] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.480 [2024-07-24 22:29:51.579086] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.480 [2024-07-24 22:29:51.580880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.480 [2024-07-24 22:29:51.589187] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.480 [2024-07-24 22:29:51.589815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.480 [2024-07-24 22:29:51.590335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.480 [2024-07-24 22:29:51.590367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.480 [2024-07-24 22:29:51.590390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.480 [2024-07-24 22:29:51.590818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.480 [2024-07-24 22:29:51.591074] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.480 [2024-07-24 22:29:51.591082] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.480 [2024-07-24 22:29:51.591089] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.480 [2024-07-24 22:29:51.592884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.480 [2024-07-24 22:29:51.601023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.480 [2024-07-24 22:29:51.601693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.480 [2024-07-24 22:29:51.602216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.480 [2024-07-24 22:29:51.602255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.480 [2024-07-24 22:29:51.602262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.480 [2024-07-24 22:29:51.602362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.480 [2024-07-24 22:29:51.602475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.480 [2024-07-24 22:29:51.602482] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.480 [2024-07-24 22:29:51.602488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.480 [2024-07-24 22:29:51.604273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.740 [2024-07-24 22:29:51.612909] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.740 [2024-07-24 22:29:51.613575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.740 [2024-07-24 22:29:51.613832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.740 [2024-07-24 22:29:51.613863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.740 [2024-07-24 22:29:51.613883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.740 [2024-07-24 22:29:51.614089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.740 [2024-07-24 22:29:51.614202] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.740 [2024-07-24 22:29:51.614210] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.740 [2024-07-24 22:29:51.614216] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.740 [2024-07-24 22:29:51.616003] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.740 [2024-07-24 22:29:51.624761] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.740 [2024-07-24 22:29:51.625429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.740 [2024-07-24 22:29:51.625907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.740 [2024-07-24 22:29:51.625938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.740 [2024-07-24 22:29:51.625960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.740 [2024-07-24 22:29:51.626252] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.740 [2024-07-24 22:29:51.626534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.741 [2024-07-24 22:29:51.626565] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.741 [2024-07-24 22:29:51.626586] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.741 [2024-07-24 22:29:51.628308] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.741 [2024-07-24 22:29:51.636572] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.741 [2024-07-24 22:29:51.637101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.637615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.637646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.741 [2024-07-24 22:29:51.637667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.741 [2024-07-24 22:29:51.638007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.741 [2024-07-24 22:29:51.638153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.741 [2024-07-24 22:29:51.638162] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.741 [2024-07-24 22:29:51.638168] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.741 [2024-07-24 22:29:51.639687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.741 [2024-07-24 22:29:51.648531] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.741 [2024-07-24 22:29:51.649134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.649663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.649694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.741 [2024-07-24 22:29:51.649716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.741 [2024-07-24 22:29:51.650176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.741 [2024-07-24 22:29:51.650320] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.741 [2024-07-24 22:29:51.650327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.741 [2024-07-24 22:29:51.650333] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.741 [2024-07-24 22:29:51.652115] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.741 [2024-07-24 22:29:51.660288] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.741 [2024-07-24 22:29:51.660939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.661385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.661419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.741 [2024-07-24 22:29:51.661440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.741 [2024-07-24 22:29:51.661870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.741 [2024-07-24 22:29:51.662213] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.741 [2024-07-24 22:29:51.662238] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.741 [2024-07-24 22:29:51.662265] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.741 [2024-07-24 22:29:51.664526] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.741 [2024-07-24 22:29:51.672242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.741 [2024-07-24 22:29:51.672805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.673327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.673359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.741 [2024-07-24 22:29:51.673380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.741 [2024-07-24 22:29:51.673626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.741 [2024-07-24 22:29:51.673755] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.741 [2024-07-24 22:29:51.673762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.741 [2024-07-24 22:29:51.673768] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.741 [2024-07-24 22:29:51.675474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.741 [2024-07-24 22:29:51.684123] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.741 [2024-07-24 22:29:51.684519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.684924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.684955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.741 [2024-07-24 22:29:51.684976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.741 [2024-07-24 22:29:51.685438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.741 [2024-07-24 22:29:51.685523] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.741 [2024-07-24 22:29:51.685530] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.741 [2024-07-24 22:29:51.685536] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.741 [2024-07-24 22:29:51.687367] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.741 [2024-07-24 22:29:51.695983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.741 [2024-07-24 22:29:51.696587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.696946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.696977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.741 [2024-07-24 22:29:51.696998] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.741 [2024-07-24 22:29:51.697440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.741 [2024-07-24 22:29:51.697555] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.741 [2024-07-24 22:29:51.697562] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.741 [2024-07-24 22:29:51.697568] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.741 [2024-07-24 22:29:51.699302] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.741 [2024-07-24 22:29:51.707820] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.741 [2024-07-24 22:29:51.708461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.708990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.709021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.741 [2024-07-24 22:29:51.709055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.741 [2024-07-24 22:29:51.709336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.741 [2024-07-24 22:29:51.709472] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.741 [2024-07-24 22:29:51.709479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.741 [2024-07-24 22:29:51.709485] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.741 [2024-07-24 22:29:51.711146] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.741 [2024-07-24 22:29:51.719697] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.741 [2024-07-24 22:29:51.720333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.720862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.720892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.741 [2024-07-24 22:29:51.720914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.741 [2024-07-24 22:29:51.721258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.741 [2024-07-24 22:29:51.721402] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.741 [2024-07-24 22:29:51.721409] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.741 [2024-07-24 22:29:51.721415] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.741 [2024-07-24 22:29:51.723193] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.741 [2024-07-24 22:29:51.731527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.741 [2024-07-24 22:29:51.732191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.732692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.741 [2024-07-24 22:29:51.732723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.741 [2024-07-24 22:29:51.732745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.741 [2024-07-24 22:29:51.733137] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.741 [2024-07-24 22:29:51.733328] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.742 [2024-07-24 22:29:51.733336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.742 [2024-07-24 22:29:51.733342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.742 [2024-07-24 22:29:51.735208] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.742 [2024-07-24 22:29:51.743470] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.742 [2024-07-24 22:29:51.744073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.744636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.744668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.742 [2024-07-24 22:29:51.744689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.742 [2024-07-24 22:29:51.744968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.742 [2024-07-24 22:29:51.745364] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.742 [2024-07-24 22:29:51.745389] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.742 [2024-07-24 22:29:51.745409] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.742 [2024-07-24 22:29:51.747367] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.742 [2024-07-24 22:29:51.755279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.742 [2024-07-24 22:29:51.755867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.756394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.756428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.742 [2024-07-24 22:29:51.756449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.742 [2024-07-24 22:29:51.756929] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.742 [2024-07-24 22:29:51.757420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.742 [2024-07-24 22:29:51.757444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.742 [2024-07-24 22:29:51.757464] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.742 [2024-07-24 22:29:51.759871] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.742 [2024-07-24 22:29:51.767893] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.742 [2024-07-24 22:29:51.768523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.769056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.769088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.742 [2024-07-24 22:29:51.769109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.742 [2024-07-24 22:29:51.769389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.742 [2024-07-24 22:29:51.769590] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.742 [2024-07-24 22:29:51.769598] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.742 [2024-07-24 22:29:51.769604] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.742 [2024-07-24 22:29:51.771352] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.742 [2024-07-24 22:29:51.779703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.742 [2024-07-24 22:29:51.780305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.780798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.780830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.742 [2024-07-24 22:29:51.780851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.742 [2024-07-24 22:29:51.781294] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.742 [2024-07-24 22:29:51.781437] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.742 [2024-07-24 22:29:51.781444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.742 [2024-07-24 22:29:51.781450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.742 [2024-07-24 22:29:51.783128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.742 [2024-07-24 22:29:51.791425] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.742 [2024-07-24 22:29:51.792020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.792562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.792594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.742 [2024-07-24 22:29:51.792615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.742 [2024-07-24 22:29:51.792994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.742 [2024-07-24 22:29:51.793126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.742 [2024-07-24 22:29:51.793134] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.742 [2024-07-24 22:29:51.793141] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.742 [2024-07-24 22:29:51.794878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.742 [2024-07-24 22:29:51.803454] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.742 [2024-07-24 22:29:51.804089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.804595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.804625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.742 [2024-07-24 22:29:51.804647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.742 [2024-07-24 22:29:51.805025] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.742 [2024-07-24 22:29:51.805519] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.742 [2024-07-24 22:29:51.805544] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.742 [2024-07-24 22:29:51.805564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.742 [2024-07-24 22:29:51.807508] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.742 [2024-07-24 22:29:51.815413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.742 [2024-07-24 22:29:51.816019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.816533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.816573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.742 [2024-07-24 22:29:51.816594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.742 [2024-07-24 22:29:51.817086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.742 [2024-07-24 22:29:51.817303] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.742 [2024-07-24 22:29:51.817314] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.742 [2024-07-24 22:29:51.817323] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.742 [2024-07-24 22:29:51.819946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.742 [2024-07-24 22:29:51.827809] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.742 [2024-07-24 22:29:51.828483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.829004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.829035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.742 [2024-07-24 22:29:51.829071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.742 [2024-07-24 22:29:51.829453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.742 [2024-07-24 22:29:51.829597] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.742 [2024-07-24 22:29:51.829605] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.742 [2024-07-24 22:29:51.829611] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.742 [2024-07-24 22:29:51.831313] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.742 [2024-07-24 22:29:51.839536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.742 [2024-07-24 22:29:51.840121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.840646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.742 [2024-07-24 22:29:51.840677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.742 [2024-07-24 22:29:51.840698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.742 [2024-07-24 22:29:51.840954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.742 [2024-07-24 22:29:51.841110] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.742 [2024-07-24 22:29:51.841118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.742 [2024-07-24 22:29:51.841124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.743 [2024-07-24 22:29:51.842932] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:56.743 [2024-07-24 22:29:51.851329] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.743 [2024-07-24 22:29:51.851947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.743 [2024-07-24 22:29:51.852488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.743 [2024-07-24 22:29:51.852521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.743 [2024-07-24 22:29:51.852549] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.743 [2024-07-24 22:29:51.852990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.743 [2024-07-24 22:29:51.853092] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.743 [2024-07-24 22:29:51.853100] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.743 [2024-07-24 22:29:51.853106] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.743 [2024-07-24 22:29:51.854733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.743 [2024-07-24 22:29:51.863460] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.743 [2024-07-24 22:29:51.863915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.743 [2024-07-24 22:29:51.864435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.743 [2024-07-24 22:29:51.864467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:56.743 [2024-07-24 22:29:51.864489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:56.743 [2024-07-24 22:29:51.864769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:56.743 [2024-07-24 22:29:51.864876] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:56.743 [2024-07-24 22:29:51.864884] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:56.743 [2024-07-24 22:29:51.864890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.743 [2024-07-24 22:29:51.866550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.004 [2024-07-24 22:29:51.875396] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.004 [2024-07-24 22:29:51.876052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.876579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.876610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.004 [2024-07-24 22:29:51.876631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.004 [2024-07-24 22:29:51.876960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.004 [2024-07-24 22:29:51.877093] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.004 [2024-07-24 22:29:51.877101] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.004 [2024-07-24 22:29:51.877108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.004 [2024-07-24 22:29:51.878868] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.004 [2024-07-24 22:29:51.887363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.004 [2024-07-24 22:29:51.887920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.888452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.888485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.004 [2024-07-24 22:29:51.888506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.004 [2024-07-24 22:29:51.888952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.004 [2024-07-24 22:29:51.889123] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.004 [2024-07-24 22:29:51.889135] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.004 [2024-07-24 22:29:51.889143] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.004 [2024-07-24 22:29:51.891805] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.004 [2024-07-24 22:29:51.899621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.004 [2024-07-24 22:29:51.900236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.900759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.900789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.004 [2024-07-24 22:29:51.900812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.004 [2024-07-24 22:29:51.901159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.004 [2024-07-24 22:29:51.901277] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.004 [2024-07-24 22:29:51.901284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.004 [2024-07-24 22:29:51.901291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.004 [2024-07-24 22:29:51.903120] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.004 [2024-07-24 22:29:51.911457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.004 [2024-07-24 22:29:51.912021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.912535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.912566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.004 [2024-07-24 22:29:51.912587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.004 [2024-07-24 22:29:51.912917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.004 [2024-07-24 22:29:51.913050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.004 [2024-07-24 22:29:51.913058] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.004 [2024-07-24 22:29:51.913065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.004 [2024-07-24 22:29:51.914847] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.004 [2024-07-24 22:29:51.923243] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.004 [2024-07-24 22:29:51.923905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.924357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.924368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.004 [2024-07-24 22:29:51.924375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.004 [2024-07-24 22:29:51.924488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.004 [2024-07-24 22:29:51.924574] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.004 [2024-07-24 22:29:51.924582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.004 [2024-07-24 22:29:51.924588] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.004 [2024-07-24 22:29:51.926303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.004 [2024-07-24 22:29:51.935096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.004 [2024-07-24 22:29:51.935689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.935932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.935941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.004 [2024-07-24 22:29:51.935948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.004 [2024-07-24 22:29:51.936047] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.004 [2024-07-24 22:29:51.936193] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.004 [2024-07-24 22:29:51.936201] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.004 [2024-07-24 22:29:51.936207] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.004 [2024-07-24 22:29:51.937973] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.004 [2024-07-24 22:29:51.947003] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.004 [2024-07-24 22:29:51.947651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.948105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.948140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.004 [2024-07-24 22:29:51.948148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.004 [2024-07-24 22:29:51.948217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.004 [2024-07-24 22:29:51.948300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.004 [2024-07-24 22:29:51.948307] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.004 [2024-07-24 22:29:51.948314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.004 [2024-07-24 22:29:51.950014] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.004 [2024-07-24 22:29:51.958826] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.004 [2024-07-24 22:29:51.959440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.959968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.959999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.004 [2024-07-24 22:29:51.960020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.004 [2024-07-24 22:29:51.960309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.004 [2024-07-24 22:29:51.960789] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.004 [2024-07-24 22:29:51.960825] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.004 [2024-07-24 22:29:51.960846] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.004 [2024-07-24 22:29:51.962784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.004 [2024-07-24 22:29:51.970774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.004 [2024-07-24 22:29:51.971400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.971903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.004 [2024-07-24 22:29:51.971934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.004 [2024-07-24 22:29:51.971955] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.004 [2024-07-24 22:29:51.972300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.004 [2024-07-24 22:29:51.972730] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.005 [2024-07-24 22:29:51.972754] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.005 [2024-07-24 22:29:51.972774] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.005 [2024-07-24 22:29:51.974628] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.005 [2024-07-24 22:29:51.982690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.005 [2024-07-24 22:29:51.983284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:51.983807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:51.983838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.005 [2024-07-24 22:29:51.983859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.005 [2024-07-24 22:29:51.983993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.005 [2024-07-24 22:29:51.984108] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.005 [2024-07-24 22:29:51.984116] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.005 [2024-07-24 22:29:51.984122] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.005 [2024-07-24 22:29:51.985936] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.005 [2024-07-24 22:29:51.994575] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.005 [2024-07-24 22:29:51.995201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:51.995671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:51.995701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.005 [2024-07-24 22:29:51.995723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.005 [2024-07-24 22:29:51.996066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.005 [2024-07-24 22:29:51.996229] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.005 [2024-07-24 22:29:51.996236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.005 [2024-07-24 22:29:51.996245] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.005 [2024-07-24 22:29:51.998083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.005 [2024-07-24 22:29:52.006449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.005 [2024-07-24 22:29:52.007061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.007592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.007622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.005 [2024-07-24 22:29:52.007644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.005 [2024-07-24 22:29:52.007924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.005 [2024-07-24 22:29:52.008261] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.005 [2024-07-24 22:29:52.008269] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.005 [2024-07-24 22:29:52.008275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.005 [2024-07-24 22:29:52.010148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.005 [2024-07-24 22:29:52.018191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.005 [2024-07-24 22:29:52.018878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.019327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.019360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.005 [2024-07-24 22:29:52.019381] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.005 [2024-07-24 22:29:52.019709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.005 [2024-07-24 22:29:52.020031] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.005 [2024-07-24 22:29:52.020047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.005 [2024-07-24 22:29:52.020056] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.005 [2024-07-24 22:29:52.022781] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.005 [2024-07-24 22:29:52.030659] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.005 [2024-07-24 22:29:52.030963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.031450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.031483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.005 [2024-07-24 22:29:52.031505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.005 [2024-07-24 22:29:52.031885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.005 [2024-07-24 22:29:52.032134] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.005 [2024-07-24 22:29:52.032142] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.005 [2024-07-24 22:29:52.032148] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.005 [2024-07-24 22:29:52.033845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.005 [2024-07-24 22:29:52.042519] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.005 [2024-07-24 22:29:52.043163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.043689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.043720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.005 [2024-07-24 22:29:52.043741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.005 [2024-07-24 22:29:52.044153] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.005 [2024-07-24 22:29:52.044252] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.005 [2024-07-24 22:29:52.044260] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.005 [2024-07-24 22:29:52.044266] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.005 [2024-07-24 22:29:52.046028] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.005 [2024-07-24 22:29:52.054379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.005 [2024-07-24 22:29:52.055018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.055454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.055486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.005 [2024-07-24 22:29:52.055507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.005 [2024-07-24 22:29:52.055886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.005 [2024-07-24 22:29:52.056064] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.005 [2024-07-24 22:29:52.056073] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.005 [2024-07-24 22:29:52.056078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.005 [2024-07-24 22:29:52.057832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.005 [2024-07-24 22:29:52.066145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.005 [2024-07-24 22:29:52.066762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.067249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.067282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.005 [2024-07-24 22:29:52.067304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.005 [2024-07-24 22:29:52.067733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.005 [2024-07-24 22:29:52.068075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.005 [2024-07-24 22:29:52.068100] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.005 [2024-07-24 22:29:52.068120] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.005 [2024-07-24 22:29:52.070067] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.005 [2024-07-24 22:29:52.078120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.005 [2024-07-24 22:29:52.078725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.079219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.005 [2024-07-24 22:29:52.079230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.005 [2024-07-24 22:29:52.079236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.005 [2024-07-24 22:29:52.079335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.005 [2024-07-24 22:29:52.079462] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.005 [2024-07-24 22:29:52.079469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.005 [2024-07-24 22:29:52.079476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.005 [2024-07-24 22:29:52.081134] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.005 [2024-07-24 22:29:52.090093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.006 [2024-07-24 22:29:52.090663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.006 [2024-07-24 22:29:52.091133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.006 [2024-07-24 22:29:52.091165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.006 [2024-07-24 22:29:52.091187] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.006 [2024-07-24 22:29:52.091310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.006 [2024-07-24 22:29:52.091423] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.006 [2024-07-24 22:29:52.091430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.006 [2024-07-24 22:29:52.091436] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.006 [2024-07-24 22:29:52.093290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.006 [2024-07-24 22:29:52.101936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.006 [2024-07-24 22:29:52.102577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.006 [2024-07-24 22:29:52.103031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.006 [2024-07-24 22:29:52.103075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.006 [2024-07-24 22:29:52.103097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.006 [2024-07-24 22:29:52.103258] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.006 [2024-07-24 22:29:52.103370] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.006 [2024-07-24 22:29:52.103378] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.006 [2024-07-24 22:29:52.103384] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.006 [2024-07-24 22:29:52.105026] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.006 [2024-07-24 22:29:52.113746] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.006 [2024-07-24 22:29:52.114369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.006 [2024-07-24 22:29:52.114819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.006 [2024-07-24 22:29:52.114849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.006 [2024-07-24 22:29:52.114871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.006 [2024-07-24 22:29:52.115274] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.006 [2024-07-24 22:29:52.115343] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.006 [2024-07-24 22:29:52.115350] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.006 [2024-07-24 22:29:52.115357] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.006 [2024-07-24 22:29:52.117114] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.006 [2024-07-24 22:29:52.125453] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.006 [2024-07-24 22:29:52.126072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.006 [2024-07-24 22:29:52.126572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.006 [2024-07-24 22:29:52.126602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.006 [2024-07-24 22:29:52.126622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.006 [2024-07-24 22:29:52.126950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.006 [2024-07-24 22:29:52.127292] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.006 [2024-07-24 22:29:52.127317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.006 [2024-07-24 22:29:52.127337] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.006 [2024-07-24 22:29:52.129591] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.267 [2024-07-24 22:29:52.137489] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.267 [2024-07-24 22:29:52.138115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.267 [2024-07-24 22:29:52.138579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.267 [2024-07-24 22:29:52.138610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.267 [2024-07-24 22:29:52.138631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.267 [2024-07-24 22:29:52.138804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.267 [2024-07-24 22:29:52.138918] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.267 [2024-07-24 22:29:52.138925] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.267 [2024-07-24 22:29:52.138932] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.267 [2024-07-24 22:29:52.140682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.267 [2024-07-24 22:29:52.149335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.267 [2024-07-24 22:29:52.149985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.267 [2024-07-24 22:29:52.150428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.267 [2024-07-24 22:29:52.150442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.267 [2024-07-24 22:29:52.150449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.267 [2024-07-24 22:29:52.150563] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.267 [2024-07-24 22:29:52.150661] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.267 [2024-07-24 22:29:52.150668] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.267 [2024-07-24 22:29:52.150675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.267 [2024-07-24 22:29:52.152556] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.267 [2024-07-24 22:29:52.161306] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.267 [2024-07-24 22:29:52.161898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.267 [2024-07-24 22:29:52.162368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.267 [2024-07-24 22:29:52.162400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.267 [2024-07-24 22:29:52.162421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.267 [2024-07-24 22:29:52.162850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.267 [2024-07-24 22:29:52.163055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.267 [2024-07-24 22:29:52.163063] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.267 [2024-07-24 22:29:52.163070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.267 [2024-07-24 22:29:52.165114] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.267 [2024-07-24 22:29:52.173243] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.267 [2024-07-24 22:29:52.173878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.267 [2024-07-24 22:29:52.174352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.267 [2024-07-24 22:29:52.174384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.267 [2024-07-24 22:29:52.174406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.267 [2024-07-24 22:29:52.174735] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.267 [2024-07-24 22:29:52.175016] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.267 [2024-07-24 22:29:52.175039] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.268 [2024-07-24 22:29:52.175068] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.268 [2024-07-24 22:29:52.177178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.268 [2024-07-24 22:29:52.185091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.268 [2024-07-24 22:29:52.185659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.186116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.186148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.268 [2024-07-24 22:29:52.186177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.268 [2024-07-24 22:29:52.186606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.268 [2024-07-24 22:29:52.186827] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.268 [2024-07-24 22:29:52.186837] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.268 [2024-07-24 22:29:52.186843] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.268 [2024-07-24 22:29:52.188475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.268 [2024-07-24 22:29:52.196789] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.268 [2024-07-24 22:29:52.197416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.197939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.197969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.268 [2024-07-24 22:29:52.197991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.268 [2024-07-24 22:29:52.198386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.268 [2024-07-24 22:29:52.198762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.268 [2024-07-24 22:29:52.198770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.268 [2024-07-24 22:29:52.198776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.268 [2024-07-24 22:29:52.200468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.268 [2024-07-24 22:29:52.208544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.268 [2024-07-24 22:29:52.209188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.209710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.209740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.268 [2024-07-24 22:29:52.209762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.268 [2024-07-24 22:29:52.210055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.268 [2024-07-24 22:29:52.210338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.268 [2024-07-24 22:29:52.210361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.268 [2024-07-24 22:29:52.210381] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.268 [2024-07-24 22:29:52.212521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.268 [2024-07-24 22:29:52.220614] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.268 [2024-07-24 22:29:52.221254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.221698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.221708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.268 [2024-07-24 22:29:52.221715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.268 [2024-07-24 22:29:52.221835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.268 [2024-07-24 22:29:52.221967] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.268 [2024-07-24 22:29:52.221975] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.268 [2024-07-24 22:29:52.221981] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.268 [2024-07-24 22:29:52.223826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.268 [2024-07-24 22:29:52.232416] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.268 [2024-07-24 22:29:52.233014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.233468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.233478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.268 [2024-07-24 22:29:52.233485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.268 [2024-07-24 22:29:52.233632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.268 [2024-07-24 22:29:52.233763] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.268 [2024-07-24 22:29:52.233771] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.268 [2024-07-24 22:29:52.233777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.268 [2024-07-24 22:29:52.235666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.268 [2024-07-24 22:29:52.244314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.268 [2024-07-24 22:29:52.244917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.245372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.245384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.268 [2024-07-24 22:29:52.245391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.268 [2024-07-24 22:29:52.245493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.268 [2024-07-24 22:29:52.245640] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.268 [2024-07-24 22:29:52.245648] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.268 [2024-07-24 22:29:52.245656] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.268 [2024-07-24 22:29:52.247518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.268 [2024-07-24 22:29:52.256372] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.268 [2024-07-24 22:29:52.256937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.257391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.257402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.268 [2024-07-24 22:29:52.257409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.268 [2024-07-24 22:29:52.257540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.268 [2024-07-24 22:29:52.257690] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.268 [2024-07-24 22:29:52.257698] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.268 [2024-07-24 22:29:52.257704] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.268 [2024-07-24 22:29:52.259555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.268 [2024-07-24 22:29:52.268398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.268 [2024-07-24 22:29:52.268990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.269460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.269494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.268 [2024-07-24 22:29:52.269516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.268 [2024-07-24 22:29:52.269947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.268 [2024-07-24 22:29:52.270087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.268 [2024-07-24 22:29:52.270095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.268 [2024-07-24 22:29:52.270102] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.268 [2024-07-24 22:29:52.271901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.268 [2024-07-24 22:29:52.280472] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.268 [2024-07-24 22:29:52.281111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.281704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.268 [2024-07-24 22:29:52.281734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.269 [2024-07-24 22:29:52.281756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.269 [2024-07-24 22:29:52.282030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.269 [2024-07-24 22:29:52.282188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.269 [2024-07-24 22:29:52.282197] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.269 [2024-07-24 22:29:52.282204] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.269 [2024-07-24 22:29:52.283964] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.269 [2024-07-24 22:29:52.292498] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.269 [2024-07-24 22:29:52.293102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.293551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.293583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.269 [2024-07-24 22:29:52.293604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.269 [2024-07-24 22:29:52.293983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.269 [2024-07-24 22:29:52.294338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.269 [2024-07-24 22:29:52.294354] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.269 [2024-07-24 22:29:52.294362] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.269 [2024-07-24 22:29:52.297027] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.269 [2024-07-24 22:29:52.305005] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.269 [2024-07-24 22:29:52.305602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.306133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.306167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.269 [2024-07-24 22:29:52.306188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.269 [2024-07-24 22:29:52.306667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.269 [2024-07-24 22:29:52.306962] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.269 [2024-07-24 22:29:52.306969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.269 [2024-07-24 22:29:52.306975] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.269 [2024-07-24 22:29:52.308578] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.269 [2024-07-24 22:29:52.316798] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.269 [2024-07-24 22:29:52.317398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.317939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.317970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.269 [2024-07-24 22:29:52.317991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.269 [2024-07-24 22:29:52.318379] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.269 [2024-07-24 22:29:52.318716] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.269 [2024-07-24 22:29:52.318724] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.269 [2024-07-24 22:29:52.318730] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.269 [2024-07-24 22:29:52.320506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.269 [2024-07-24 22:29:52.328696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.269 [2024-07-24 22:29:52.329314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.329795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.329834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.269 [2024-07-24 22:29:52.329842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.269 [2024-07-24 22:29:52.329970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.269 [2024-07-24 22:29:52.330088] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.269 [2024-07-24 22:29:52.330097] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.269 [2024-07-24 22:29:52.330105] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.269 [2024-07-24 22:29:52.331822] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.269 [2024-07-24 22:29:52.340617] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.269 [2024-07-24 22:29:52.341239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.341691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.341722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.269 [2024-07-24 22:29:52.341744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.269 [2024-07-24 22:29:52.342111] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.269 [2024-07-24 22:29:52.342224] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.269 [2024-07-24 22:29:52.342232] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.269 [2024-07-24 22:29:52.342238] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.269 [2024-07-24 22:29:52.344012] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.269 [2024-07-24 22:29:52.352695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.269 [2024-07-24 22:29:52.353324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.353781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.353811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.269 [2024-07-24 22:29:52.353833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.269 [2024-07-24 22:29:52.353981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.269 [2024-07-24 22:29:52.354085] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.269 [2024-07-24 22:29:52.354093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.269 [2024-07-24 22:29:52.354099] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.269 [2024-07-24 22:29:52.356886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.269 [2024-07-24 22:29:52.365183] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.269 [2024-07-24 22:29:52.365837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.366335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.366346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.269 [2024-07-24 22:29:52.366353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.269 [2024-07-24 22:29:52.366454] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.269 [2024-07-24 22:29:52.366601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.269 [2024-07-24 22:29:52.366610] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.269 [2024-07-24 22:29:52.366616] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.269 [2024-07-24 22:29:52.368434] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.269 [2024-07-24 22:29:52.377270] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.269 [2024-07-24 22:29:52.377770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.378178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.269 [2024-07-24 22:29:52.378189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.269 [2024-07-24 22:29:52.378197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.269 [2024-07-24 22:29:52.378360] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.269 [2024-07-24 22:29:52.378460] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.269 [2024-07-24 22:29:52.378468] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.269 [2024-07-24 22:29:52.378474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.270 [2024-07-24 22:29:52.380351] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.270 [2024-07-24 22:29:52.389386] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.270 [2024-07-24 22:29:52.390015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.270 [2024-07-24 22:29:52.390461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.270 [2024-07-24 22:29:52.390494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.270 [2024-07-24 22:29:52.390516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.270 [2024-07-24 22:29:52.390794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.270 [2024-07-24 22:29:52.390957] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.270 [2024-07-24 22:29:52.390965] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.270 [2024-07-24 22:29:52.390971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.270 [2024-07-24 22:29:52.392791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.530 [2024-07-24 22:29:52.401476] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.530 [2024-07-24 22:29:52.402103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.402559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.402590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.530 [2024-07-24 22:29:52.402611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.530 [2024-07-24 22:29:52.402840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.530 [2024-07-24 22:29:52.403119] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.530 [2024-07-24 22:29:52.403127] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.530 [2024-07-24 22:29:52.403134] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.530 [2024-07-24 22:29:52.404870] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.530 [2024-07-24 22:29:52.413463] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.530 [2024-07-24 22:29:52.414064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.414555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.414585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.530 [2024-07-24 22:29:52.414606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.530 [2024-07-24 22:29:52.415096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.530 [2024-07-24 22:29:52.415334] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.530 [2024-07-24 22:29:52.415342] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.530 [2024-07-24 22:29:52.415349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.530 [2024-07-24 22:29:52.417727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.530 [2024-07-24 22:29:52.426386] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.530 [2024-07-24 22:29:52.426930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.427305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.427317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.530 [2024-07-24 22:29:52.427324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.530 [2024-07-24 22:29:52.427471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.530 [2024-07-24 22:29:52.427602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.530 [2024-07-24 22:29:52.427610] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.530 [2024-07-24 22:29:52.427617] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.530 [2024-07-24 22:29:52.429479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.530 [2024-07-24 22:29:52.438359] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.530 [2024-07-24 22:29:52.438844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.439261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.439274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.530 [2024-07-24 22:29:52.439282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.530 [2024-07-24 22:29:52.439369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.530 [2024-07-24 22:29:52.439485] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.530 [2024-07-24 22:29:52.439493] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.530 [2024-07-24 22:29:52.439499] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.530 [2024-07-24 22:29:52.441375] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.530 [2024-07-24 22:29:52.450411] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.530 [2024-07-24 22:29:52.451038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.451450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.451460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.530 [2024-07-24 22:29:52.451467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.530 [2024-07-24 22:29:52.451569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.530 [2024-07-24 22:29:52.451701] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.530 [2024-07-24 22:29:52.451709] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.530 [2024-07-24 22:29:52.451716] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.530 [2024-07-24 22:29:52.453593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.530 [2024-07-24 22:29:52.462298] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.530 [2024-07-24 22:29:52.462924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.463338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.463349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.530 [2024-07-24 22:29:52.463356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.530 [2024-07-24 22:29:52.463427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.530 [2024-07-24 22:29:52.463528] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.530 [2024-07-24 22:29:52.463536] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.530 [2024-07-24 22:29:52.463543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.530 [2024-07-24 22:29:52.465342] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.530 [2024-07-24 22:29:52.474446] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.530 [2024-07-24 22:29:52.474955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.475365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.530 [2024-07-24 22:29:52.475375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.530 [2024-07-24 22:29:52.475383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.530 [2024-07-24 22:29:52.475484] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.530 [2024-07-24 22:29:52.475601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.531 [2024-07-24 22:29:52.475608] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.531 [2024-07-24 22:29:52.475615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.531 [2024-07-24 22:29:52.477446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.531 [2024-07-24 22:29:52.486590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.531 [2024-07-24 22:29:52.487129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.487534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.487573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.531 [2024-07-24 22:29:52.487594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.531 [2024-07-24 22:29:52.487975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.531 [2024-07-24 22:29:52.488271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.531 [2024-07-24 22:29:52.488280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.531 [2024-07-24 22:29:52.488286] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.531 [2024-07-24 22:29:52.490201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.531 [2024-07-24 22:29:52.498595] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.531 [2024-07-24 22:29:52.499253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.499715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.499747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.531 [2024-07-24 22:29:52.499769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.531 [2024-07-24 22:29:52.500112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.531 [2024-07-24 22:29:52.500269] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.531 [2024-07-24 22:29:52.500276] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.531 [2024-07-24 22:29:52.500282] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.531 [2024-07-24 22:29:52.501776] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.531 [2024-07-24 22:29:52.510457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.531 [2024-07-24 22:29:52.511063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.511518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.511550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.531 [2024-07-24 22:29:52.511571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.531 [2024-07-24 22:29:52.511902] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.531 [2024-07-24 22:29:52.512246] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.531 [2024-07-24 22:29:52.512254] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.531 [2024-07-24 22:29:52.512260] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.531 [2024-07-24 22:29:52.513990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.531 [2024-07-24 22:29:52.522366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.531 [2024-07-24 22:29:52.522949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.523416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.523449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.531 [2024-07-24 22:29:52.523482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.531 [2024-07-24 22:29:52.523714] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.531 [2024-07-24 22:29:52.524203] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.531 [2024-07-24 22:29:52.524229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.531 [2024-07-24 22:29:52.524249] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.531 [2024-07-24 22:29:52.526062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.531 [2024-07-24 22:29:52.534419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.531 [2024-07-24 22:29:52.535034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.535562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.535593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.531 [2024-07-24 22:29:52.535614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.531 [2024-07-24 22:29:52.536054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.531 [2024-07-24 22:29:52.536435] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.531 [2024-07-24 22:29:52.536460] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.531 [2024-07-24 22:29:52.536481] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.531 [2024-07-24 22:29:52.539396] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.531 [2024-07-24 22:29:52.547149] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.531 [2024-07-24 22:29:52.547765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.548286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.548324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.531 [2024-07-24 22:29:52.548332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.531 [2024-07-24 22:29:52.548430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.531 [2024-07-24 22:29:52.548528] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.531 [2024-07-24 22:29:52.548536] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.531 [2024-07-24 22:29:52.548542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.531 [2024-07-24 22:29:52.550089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.531 [2024-07-24 22:29:52.559118] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.531 [2024-07-24 22:29:52.559627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.560088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.560120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.531 [2024-07-24 22:29:52.560141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.531 [2024-07-24 22:29:52.560348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.531 [2024-07-24 22:29:52.560461] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.531 [2024-07-24 22:29:52.560468] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.531 [2024-07-24 22:29:52.560474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.531 [2024-07-24 22:29:52.562323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.531 [2024-07-24 22:29:52.571057] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.531 [2024-07-24 22:29:52.571607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.572073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.572106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.531 [2024-07-24 22:29:52.572127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.531 [2024-07-24 22:29:52.572407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.531 [2024-07-24 22:29:52.572836] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.531 [2024-07-24 22:29:52.572861] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.531 [2024-07-24 22:29:52.572881] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.531 [2024-07-24 22:29:52.574746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.531 [2024-07-24 22:29:52.583000] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.531 [2024-07-24 22:29:52.583591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.584097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.531 [2024-07-24 22:29:52.584108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.531 [2024-07-24 22:29:52.584115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.531 [2024-07-24 22:29:52.584262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.531 [2024-07-24 22:29:52.584378] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.532 [2024-07-24 22:29:52.584386] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.532 [2024-07-24 22:29:52.584392] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.532 [2024-07-24 22:29:52.586048] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.532 [2024-07-24 22:29:52.594901] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.532 [2024-07-24 22:29:52.595519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.532 [2024-07-24 22:29:52.596052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.532 [2024-07-24 22:29:52.596063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.532 [2024-07-24 22:29:52.596069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.532 [2024-07-24 22:29:52.596201] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.532 [2024-07-24 22:29:52.596321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.532 [2024-07-24 22:29:52.596329] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.532 [2024-07-24 22:29:52.596335] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.532 [2024-07-24 22:29:52.598240] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.532 [2024-07-24 22:29:52.606943] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.532 [2024-07-24 22:29:52.607534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.532 [2024-07-24 22:29:52.607968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.532 [2024-07-24 22:29:52.607978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.532 [2024-07-24 22:29:52.607985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.532 [2024-07-24 22:29:52.608094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.532 [2024-07-24 22:29:52.608240] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.532 [2024-07-24 22:29:52.608247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.532 [2024-07-24 22:29:52.608253] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.532 [2024-07-24 22:29:52.609974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.532 [2024-07-24 22:29:52.618819] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.532 [2024-07-24 22:29:52.619574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.532 [2024-07-24 22:29:52.620049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.532 [2024-07-24 22:29:52.620060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.532 [2024-07-24 22:29:52.620067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.532 [2024-07-24 22:29:52.620168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.532 [2024-07-24 22:29:52.620270] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.532 [2024-07-24 22:29:52.620278] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.532 [2024-07-24 22:29:52.620284] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.532 [2024-07-24 22:29:52.622178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.532 [2024-07-24 22:29:52.630577] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.532 [2024-07-24 22:29:52.631224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.532 [2024-07-24 22:29:52.631670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.532 [2024-07-24 22:29:52.631700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.532 [2024-07-24 22:29:52.631722] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.532 [2024-07-24 22:29:52.631915] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.532 [2024-07-24 22:29:52.632095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.532 [2024-07-24 22:29:52.632107] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.532 [2024-07-24 22:29:52.632113] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.532 [2024-07-24 22:29:52.633859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.532 [2024-07-24 22:29:52.642447] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.532 [2024-07-24 22:29:52.643104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.532 [2024-07-24 22:29:52.643509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.532 [2024-07-24 22:29:52.643540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.532 [2024-07-24 22:29:52.643575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.532 [2024-07-24 22:29:52.643674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.532 [2024-07-24 22:29:52.643802] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.532 [2024-07-24 22:29:52.643809] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.532 [2024-07-24 22:29:52.643816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.532 [2024-07-24 22:29:52.645639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.532 [2024-07-24 22:29:52.654312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.532 [2024-07-24 22:29:52.654863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.532 [2024-07-24 22:29:52.655315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.532 [2024-07-24 22:29:52.655347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.532 [2024-07-24 22:29:52.655369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.532 [2024-07-24 22:29:52.655608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.532 [2024-07-24 22:29:52.655751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.532 [2024-07-24 22:29:52.655759] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.532 [2024-07-24 22:29:52.655765] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.532 [2024-07-24 22:29:52.657440] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.794 [2024-07-24 22:29:52.666238] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.794 [2024-07-24 22:29:52.666897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.667398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.667434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.794 [2024-07-24 22:29:52.667456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.794 [2024-07-24 22:29:52.667738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.794 [2024-07-24 22:29:52.667933] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.794 [2024-07-24 22:29:52.667941] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.794 [2024-07-24 22:29:52.667950] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.794 [2024-07-24 22:29:52.669839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.794 [2024-07-24 22:29:52.678304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.794 [2024-07-24 22:29:52.678806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.679248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.679281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.794 [2024-07-24 22:29:52.679303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.794 [2024-07-24 22:29:52.679682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.794 [2024-07-24 22:29:52.680007] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.794 [2024-07-24 22:29:52.680015] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.794 [2024-07-24 22:29:52.680021] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.794 [2024-07-24 22:29:52.681906] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.794 [2024-07-24 22:29:52.690308] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.794 [2024-07-24 22:29:52.690843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.691378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.691411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.794 [2024-07-24 22:29:52.691433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.794 [2024-07-24 22:29:52.691862] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.794 [2024-07-24 22:29:52.691993] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.794 [2024-07-24 22:29:52.692001] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.794 [2024-07-24 22:29:52.692007] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.794 [2024-07-24 22:29:52.693870] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.794 [2024-07-24 22:29:52.702153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.794 [2024-07-24 22:29:52.702814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.703320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.703352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.794 [2024-07-24 22:29:52.703373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.794 [2024-07-24 22:29:52.703702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.794 [2024-07-24 22:29:52.703930] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.794 [2024-07-24 22:29:52.703938] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.794 [2024-07-24 22:29:52.703944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.794 [2024-07-24 22:29:52.705722] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.794 [2024-07-24 22:29:52.714093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.794 [2024-07-24 22:29:52.714667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.715223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.715256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.794 [2024-07-24 22:29:52.715277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.794 [2024-07-24 22:29:52.715606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.794 [2024-07-24 22:29:52.715874] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.794 [2024-07-24 22:29:52.715881] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.794 [2024-07-24 22:29:52.715888] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.794 [2024-07-24 22:29:52.717606] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.794 [2024-07-24 22:29:52.725954] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.794 [2024-07-24 22:29:52.726577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.727120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.727152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.794 [2024-07-24 22:29:52.727173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.794 [2024-07-24 22:29:52.727502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.794 [2024-07-24 22:29:52.727896] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.794 [2024-07-24 22:29:52.727903] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.794 [2024-07-24 22:29:52.727909] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.794 [2024-07-24 22:29:52.729582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.794 [2024-07-24 22:29:52.737762] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.794 [2024-07-24 22:29:52.738397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.738951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.738981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.794 [2024-07-24 22:29:52.739002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.794 [2024-07-24 22:29:52.739344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.794 [2024-07-24 22:29:52.739581] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.794 [2024-07-24 22:29:52.739592] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.794 [2024-07-24 22:29:52.739601] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.794 [2024-07-24 22:29:52.742139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.794 [2024-07-24 22:29:52.750369] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.794 [2024-07-24 22:29:52.751017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.751586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.751617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.794 [2024-07-24 22:29:52.751638] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.794 [2024-07-24 22:29:52.751969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.794 [2024-07-24 22:29:52.752120] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.794 [2024-07-24 22:29:52.752129] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.794 [2024-07-24 22:29:52.752136] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.794 [2024-07-24 22:29:52.753834] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.794 [2024-07-24 22:29:52.762092] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.794 [2024-07-24 22:29:52.762685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.763234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.794 [2024-07-24 22:29:52.763266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.794 [2024-07-24 22:29:52.763288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.794 [2024-07-24 22:29:52.763617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.794 [2024-07-24 22:29:52.763903] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.794 [2024-07-24 22:29:52.763911] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.794 [2024-07-24 22:29:52.763917] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.795 [2024-07-24 22:29:52.765687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.795 [2024-07-24 22:29:52.773915] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.795 [2024-07-24 22:29:52.774568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.775118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.775150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.795 [2024-07-24 22:29:52.775171] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.795 [2024-07-24 22:29:52.775599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.795 [2024-07-24 22:29:52.776028] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.795 [2024-07-24 22:29:52.776055] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.795 [2024-07-24 22:29:52.776062] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.795 [2024-07-24 22:29:52.777689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.795 [2024-07-24 22:29:52.785710] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.795 [2024-07-24 22:29:52.786339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.786892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.786924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.795 [2024-07-24 22:29:52.786945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.795 [2024-07-24 22:29:52.787118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.795 [2024-07-24 22:29:52.787191] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.795 [2024-07-24 22:29:52.787199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.795 [2024-07-24 22:29:52.787206] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.795 [2024-07-24 22:29:52.788876] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.795 [2024-07-24 22:29:52.797576] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.795 [2024-07-24 22:29:52.798131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.798682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.798712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.795 [2024-07-24 22:29:52.798733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.795 [2024-07-24 22:29:52.799115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.795 [2024-07-24 22:29:52.799244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.795 [2024-07-24 22:29:52.799251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.795 [2024-07-24 22:29:52.799257] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.795 [2024-07-24 22:29:52.801150] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.795 [2024-07-24 22:29:52.809337] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.795 [2024-07-24 22:29:52.809902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.810360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.810371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.795 [2024-07-24 22:29:52.810377] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.795 [2024-07-24 22:29:52.810476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.795 [2024-07-24 22:29:52.810574] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.795 [2024-07-24 22:29:52.810582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.795 [2024-07-24 22:29:52.810588] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.795 [2024-07-24 22:29:52.812364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.795 [2024-07-24 22:29:52.821057] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.795 [2024-07-24 22:29:52.821672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.822201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.822239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.795 [2024-07-24 22:29:52.822261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.795 [2024-07-24 22:29:52.822640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.795 [2024-07-24 22:29:52.823019] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.795 [2024-07-24 22:29:52.823052] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.795 [2024-07-24 22:29:52.823073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.795 [2024-07-24 22:29:52.824887] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.795 [2024-07-24 22:29:52.833014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.795 [2024-07-24 22:29:52.833631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.834112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.834143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.795 [2024-07-24 22:29:52.834165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.795 [2024-07-24 22:29:52.834370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.795 [2024-07-24 22:29:52.834513] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.795 [2024-07-24 22:29:52.834520] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.795 [2024-07-24 22:29:52.834526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.795 [2024-07-24 22:29:52.836338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.795 [2024-07-24 22:29:52.844906] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.795 [2024-07-24 22:29:52.845537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.846007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.846037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.795 [2024-07-24 22:29:52.846074] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.795 [2024-07-24 22:29:52.846329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.795 [2024-07-24 22:29:52.846443] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.795 [2024-07-24 22:29:52.846450] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.795 [2024-07-24 22:29:52.846457] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.795 [2024-07-24 22:29:52.848174] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.795 [2024-07-24 22:29:52.856862] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.795 [2024-07-24 22:29:52.857442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.857916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.857947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.795 [2024-07-24 22:29:52.857975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.795 [2024-07-24 22:29:52.858268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.795 [2024-07-24 22:29:52.858407] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.795 [2024-07-24 22:29:52.858415] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.795 [2024-07-24 22:29:52.858421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.795 [2024-07-24 22:29:52.860027] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.795 [2024-07-24 22:29:52.868609] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.795 [2024-07-24 22:29:52.869184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.869701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.795 [2024-07-24 22:29:52.869732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.795 [2024-07-24 22:29:52.869753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.795 [2024-07-24 22:29:52.870090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.795 [2024-07-24 22:29:52.870252] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.795 [2024-07-24 22:29:52.870259] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.795 [2024-07-24 22:29:52.870266] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.795 [2024-07-24 22:29:52.872107] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.795 [2024-07-24 22:29:52.880456] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.795 [2024-07-24 22:29:52.881059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-07-24 22:29:52.881581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-07-24 22:29:52.881612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.796 [2024-07-24 22:29:52.881634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.796 [2024-07-24 22:29:52.882014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.796 [2024-07-24 22:29:52.882488] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.796 [2024-07-24 22:29:52.882500] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.796 [2024-07-24 22:29:52.882509] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.796 [2024-07-24 22:29:52.885108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.796 [2024-07-24 22:29:52.892789] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.796 [2024-07-24 22:29:52.893404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-07-24 22:29:52.893955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-07-24 22:29:52.893985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.796 [2024-07-24 22:29:52.894007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.796 [2024-07-24 22:29:52.894358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.796 [2024-07-24 22:29:52.894460] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.796 [2024-07-24 22:29:52.894468] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.796 [2024-07-24 22:29:52.894475] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.796 [2024-07-24 22:29:52.896156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:57.796 [2024-07-24 22:29:52.904775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.796 [2024-07-24 22:29:52.905387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-07-24 22:29:52.905934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-07-24 22:29:52.905965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.796 [2024-07-24 22:29:52.905986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.796 [2024-07-24 22:29:52.906352] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.796 [2024-07-24 22:29:52.906480] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.796 [2024-07-24 22:29:52.906487] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.796 [2024-07-24 22:29:52.906494] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.796 [2024-07-24 22:29:52.908099] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:57.796 [2024-07-24 22:29:52.916501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:57.796 [2024-07-24 22:29:52.917178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-07-24 22:29:52.917749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.796 [2024-07-24 22:29:52.917779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:57.796 [2024-07-24 22:29:52.917800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:57.796 [2024-07-24 22:29:52.918125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:57.796 [2024-07-24 22:29:52.918243] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:57.796 [2024-07-24 22:29:52.918250] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:57.796 [2024-07-24 22:29:52.918257] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:57.796 [2024-07-24 22:29:52.919963] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.056 [2024-07-24 22:29:52.928549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.056 [2024-07-24 22:29:52.929122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.056 [2024-07-24 22:29:52.929632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.056 [2024-07-24 22:29:52.929662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.056 [2024-07-24 22:29:52.929683] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.056 [2024-07-24 22:29:52.930123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.056 [2024-07-24 22:29:52.930466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.056 [2024-07-24 22:29:52.930473] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.056 [2024-07-24 22:29:52.930479] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.056 [2024-07-24 22:29:52.932229] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.056 [2024-07-24 22:29:52.940439] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.056 [2024-07-24 22:29:52.941094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.056 [2024-07-24 22:29:52.941603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.056 [2024-07-24 22:29:52.941633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.056 [2024-07-24 22:29:52.941654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.056 [2024-07-24 22:29:52.941978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.056 [2024-07-24 22:29:52.942150] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.056 [2024-07-24 22:29:52.942161] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.056 [2024-07-24 22:29:52.942170] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.056 [2024-07-24 22:29:52.944878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.056 [2024-07-24 22:29:52.952947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.056 [2024-07-24 22:29:52.953561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.056 [2024-07-24 22:29:52.954074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.056 [2024-07-24 22:29:52.954107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.056 [2024-07-24 22:29:52.954129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.056 [2024-07-24 22:29:52.954525] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.056 [2024-07-24 22:29:52.954652] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.056 [2024-07-24 22:29:52.954660] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.056 [2024-07-24 22:29:52.954666] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.056 [2024-07-24 22:29:52.956265] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.056 [2024-07-24 22:29:52.964819] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.056 [2024-07-24 22:29:52.965351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.056 [2024-07-24 22:29:52.965906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.056 [2024-07-24 22:29:52.965937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.056 [2024-07-24 22:29:52.965959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.056 [2024-07-24 22:29:52.966502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.056 [2024-07-24 22:29:52.966694] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.056 [2024-07-24 22:29:52.966705] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.056 [2024-07-24 22:29:52.966711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.056 [2024-07-24 22:29:52.968411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.056 [2024-07-24 22:29:52.976544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.056 [2024-07-24 22:29:52.977124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.056 [2024-07-24 22:29:52.977595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.056 [2024-07-24 22:29:52.977626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.056 [2024-07-24 22:29:52.977648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.056 [2024-07-24 22:29:52.978139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.056 [2024-07-24 22:29:52.978521] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.056 [2024-07-24 22:29:52.978546] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.056 [2024-07-24 22:29:52.978566] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.056 [2024-07-24 22:29:52.980594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.056 [2024-07-24 22:29:52.988423] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.056 [2024-07-24 22:29:52.988996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.056 [2024-07-24 22:29:52.989570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.056 [2024-07-24 22:29:52.989603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.056 [2024-07-24 22:29:52.989624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.056 [2024-07-24 22:29:52.990065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.057 [2024-07-24 22:29:52.990548] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.057 [2024-07-24 22:29:52.990572] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.057 [2024-07-24 22:29:52.990601] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.057 [2024-07-24 22:29:52.992273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.057 [2024-07-24 22:29:53.000131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.057 [2024-07-24 22:29:53.000776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.001301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.001334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.057 [2024-07-24 22:29:53.001366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.057 [2024-07-24 22:29:53.001480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.057 [2024-07-24 22:29:53.001609] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.057 [2024-07-24 22:29:53.001616] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.057 [2024-07-24 22:29:53.001625] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.057 [2024-07-24 22:29:53.003313] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.057 [2024-07-24 22:29:53.011958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.057 [2024-07-24 22:29:53.012596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.013132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.013164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.057 [2024-07-24 22:29:53.013186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.057 [2024-07-24 22:29:53.013515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.057 [2024-07-24 22:29:53.013848] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.057 [2024-07-24 22:29:53.013855] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.057 [2024-07-24 22:29:53.013861] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.057 [2024-07-24 22:29:53.015620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.057 [2024-07-24 22:29:53.023748] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.057 [2024-07-24 22:29:53.024377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.024907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.024937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.057 [2024-07-24 22:29:53.024958] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.057 [2024-07-24 22:29:53.025300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.057 [2024-07-24 22:29:53.025782] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.057 [2024-07-24 22:29:53.025813] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.057 [2024-07-24 22:29:53.025820] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.057 [2024-07-24 22:29:53.027562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.057 [2024-07-24 22:29:53.035574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.057 [2024-07-24 22:29:53.036207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.036752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.036783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.057 [2024-07-24 22:29:53.036805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.057 [2024-07-24 22:29:53.037146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.057 [2024-07-24 22:29:53.037479] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.057 [2024-07-24 22:29:53.037503] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.057 [2024-07-24 22:29:53.037523] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.057 [2024-07-24 22:29:53.039499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.057 [2024-07-24 22:29:53.047402] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.057 [2024-07-24 22:29:53.047993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.048557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.048590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.057 [2024-07-24 22:29:53.048611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.057 [2024-07-24 22:29:53.048990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.057 [2024-07-24 22:29:53.049383] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.057 [2024-07-24 22:29:53.049409] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.057 [2024-07-24 22:29:53.049429] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.057 [2024-07-24 22:29:53.051358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.057 [2024-07-24 22:29:53.059183] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.057 [2024-07-24 22:29:53.059755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.060239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.060273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.057 [2024-07-24 22:29:53.060294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.057 [2024-07-24 22:29:53.060625] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.057 [2024-07-24 22:29:53.061065] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.057 [2024-07-24 22:29:53.061093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.057 [2024-07-24 22:29:53.061099] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.057 [2024-07-24 22:29:53.062754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.057 [2024-07-24 22:29:53.071037] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.057 [2024-07-24 22:29:53.071666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.072184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.072217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.057 [2024-07-24 22:29:53.072238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.057 [2024-07-24 22:29:53.072517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.057 [2024-07-24 22:29:53.072747] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.057 [2024-07-24 22:29:53.072755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.057 [2024-07-24 22:29:53.072761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.057 [2024-07-24 22:29:53.074603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.057 [2024-07-24 22:29:53.082893] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.057 [2024-07-24 22:29:53.083540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.084097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.084131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.057 [2024-07-24 22:29:53.084152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.057 [2024-07-24 22:29:53.084531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.057 [2024-07-24 22:29:53.085074] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.057 [2024-07-24 22:29:53.085099] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.057 [2024-07-24 22:29:53.085119] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.057 [2024-07-24 22:29:53.087221] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.057 [2024-07-24 22:29:53.094711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.057 [2024-07-24 22:29:53.095320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.057 [2024-07-24 22:29:53.095793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.095826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.058 [2024-07-24 22:29:53.095848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.058 [2024-07-24 22:29:53.096242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.058 [2024-07-24 22:29:53.096588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.058 [2024-07-24 22:29:53.096612] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.058 [2024-07-24 22:29:53.096631] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.058 [2024-07-24 22:29:53.098479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.058 [2024-07-24 22:29:53.106540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.058 [2024-07-24 22:29:53.107172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.107719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.107751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.058 [2024-07-24 22:29:53.107772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.058 [2024-07-24 22:29:53.108081] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.058 [2024-07-24 22:29:53.108224] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.058 [2024-07-24 22:29:53.108232] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.058 [2024-07-24 22:29:53.108238] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.058 [2024-07-24 22:29:53.109908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.058 [2024-07-24 22:29:53.118366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.058 [2024-07-24 22:29:53.118927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.119468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.119500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.058 [2024-07-24 22:29:53.119522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.058 [2024-07-24 22:29:53.119851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.058 [2024-07-24 22:29:53.119989] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.058 [2024-07-24 22:29:53.119997] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.058 [2024-07-24 22:29:53.120003] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.058 [2024-07-24 22:29:53.121790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.058 [2024-07-24 22:29:53.130076] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.058 [2024-07-24 22:29:53.130673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.131222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.131254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.058 [2024-07-24 22:29:53.131276] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.058 [2024-07-24 22:29:53.131655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.058 [2024-07-24 22:29:53.132088] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.058 [2024-07-24 22:29:53.132097] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.058 [2024-07-24 22:29:53.132103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.058 [2024-07-24 22:29:53.133894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.058 [2024-07-24 22:29:53.141882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.058 [2024-07-24 22:29:53.142442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.142994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.143025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.058 [2024-07-24 22:29:53.143058] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.058 [2024-07-24 22:29:53.143438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.058 [2024-07-24 22:29:53.143868] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.058 [2024-07-24 22:29:53.143892] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.058 [2024-07-24 22:29:53.143923] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.058 [2024-07-24 22:29:53.146525] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.058 [2024-07-24 22:29:53.154682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.058 [2024-07-24 22:29:53.155319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.155874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.155905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.058 [2024-07-24 22:29:53.155926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.058 [2024-07-24 22:29:53.156268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.058 [2024-07-24 22:29:53.156651] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.058 [2024-07-24 22:29:53.156675] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.058 [2024-07-24 22:29:53.156694] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.058 [2024-07-24 22:29:53.158661] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.058 [2024-07-24 22:29:53.166658] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.058 [2024-07-24 22:29:53.167323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.167811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.167841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.058 [2024-07-24 22:29:53.167863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.058 [2024-07-24 22:29:53.168028] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.058 [2024-07-24 22:29:53.168182] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.058 [2024-07-24 22:29:53.168192] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.058 [2024-07-24 22:29:53.168199] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.058 [2024-07-24 22:29:53.170047] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.058 [2024-07-24 22:29:53.178660] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.058 [2024-07-24 22:29:53.179229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.179714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.058 [2024-07-24 22:29:53.179744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.058 [2024-07-24 22:29:53.179766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.058 [2024-07-24 22:29:53.180129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.058 [2024-07-24 22:29:53.180242] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.058 [2024-07-24 22:29:53.180249] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.058 [2024-07-24 22:29:53.180256] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.058 [2024-07-24 22:29:53.182134] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.324 [2024-07-24 22:29:53.190652] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.324 [2024-07-24 22:29:53.191258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.324 [2024-07-24 22:29:53.191774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.324 [2024-07-24 22:29:53.191806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.324 [2024-07-24 22:29:53.191834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.324 [2024-07-24 22:29:53.192128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.324 [2024-07-24 22:29:53.192300] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.324 [2024-07-24 22:29:53.192308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.324 [2024-07-24 22:29:53.192314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.324 [2024-07-24 22:29:53.194075] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.324 [2024-07-24 22:29:53.202455] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.324 [2024-07-24 22:29:53.203097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.324 [2024-07-24 22:29:53.203649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.324 [2024-07-24 22:29:53.203680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.324 [2024-07-24 22:29:53.203702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.324 [2024-07-24 22:29:53.203966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.324 [2024-07-24 22:29:53.204085] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.324 [2024-07-24 22:29:53.204093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.324 [2024-07-24 22:29:53.204100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.324 [2024-07-24 22:29:53.205797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.324 [2024-07-24 22:29:53.214252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.324 [2024-07-24 22:29:53.214879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.324 [2024-07-24 22:29:53.215401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.324 [2024-07-24 22:29:53.215415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.324 [2024-07-24 22:29:53.215425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.324 [2024-07-24 22:29:53.215570] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.324 [2024-07-24 22:29:53.215714] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.324 [2024-07-24 22:29:53.215725] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.324 [2024-07-24 22:29:53.215734] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.324 [2024-07-24 22:29:53.218267] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.325 [2024-07-24 22:29:53.226566] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.325 [2024-07-24 22:29:53.227168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.227585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.227615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.325 [2024-07-24 22:29:53.227637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.325 [2024-07-24 22:29:53.227974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.325 [2024-07-24 22:29:53.228230] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.325 [2024-07-24 22:29:53.228239] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.325 [2024-07-24 22:29:53.228245] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.325 [2024-07-24 22:29:53.230214] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.325 [2024-07-24 22:29:53.238440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.325 [2024-07-24 22:29:53.239094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.239610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.239641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.325 [2024-07-24 22:29:53.239662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.325 [2024-07-24 22:29:53.240016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.325 [2024-07-24 22:29:53.240162] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.325 [2024-07-24 22:29:53.240171] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.325 [2024-07-24 22:29:53.240177] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.325 [2024-07-24 22:29:53.241819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.325 [2024-07-24 22:29:53.250410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.325 [2024-07-24 22:29:53.251033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.251575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.251606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.325 [2024-07-24 22:29:53.251628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.325 [2024-07-24 22:29:53.251958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.325 [2024-07-24 22:29:53.252298] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.325 [2024-07-24 22:29:53.252323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.325 [2024-07-24 22:29:53.252343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.325 [2024-07-24 22:29:53.254229] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.325 [2024-07-24 22:29:53.262279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.325 [2024-07-24 22:29:53.262884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.263337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.263370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.325 [2024-07-24 22:29:53.263391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.325 [2024-07-24 22:29:53.263869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.325 [2024-07-24 22:29:53.264056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.325 [2024-07-24 22:29:53.264065] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.325 [2024-07-24 22:29:53.264071] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.325 [2024-07-24 22:29:53.265894] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.325 [2024-07-24 22:29:53.274120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.325 [2024-07-24 22:29:53.274616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.275098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.275131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.325 [2024-07-24 22:29:53.275154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.325 [2024-07-24 22:29:53.275395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.325 [2024-07-24 22:29:53.275538] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.325 [2024-07-24 22:29:53.275546] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.325 [2024-07-24 22:29:53.275552] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.325 [2024-07-24 22:29:53.277062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.325 [2024-07-24 22:29:53.285936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.325 [2024-07-24 22:29:53.286528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.287040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.287082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.325 [2024-07-24 22:29:53.287103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.325 [2024-07-24 22:29:53.287264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.325 [2024-07-24 22:29:53.287377] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.325 [2024-07-24 22:29:53.287384] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.325 [2024-07-24 22:29:53.287391] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.325 [2024-07-24 22:29:53.289280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.325 [2024-07-24 22:29:53.297693] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.325 [2024-07-24 22:29:53.298280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.298812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.325 [2024-07-24 22:29:53.298843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.325 [2024-07-24 22:29:53.298865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.325 [2024-07-24 22:29:53.299105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.325 [2024-07-24 22:29:53.299219] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.325 [2024-07-24 22:29:53.299229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.326 [2024-07-24 22:29:53.299235] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.326 [2024-07-24 22:29:53.301046] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.326 [2024-07-24 22:29:53.309457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.326 [2024-07-24 22:29:53.310086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.310553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.310584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.326 [2024-07-24 22:29:53.310606] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.326 [2024-07-24 22:29:53.311037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.326 [2024-07-24 22:29:53.311434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.326 [2024-07-24 22:29:53.311458] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.326 [2024-07-24 22:29:53.311478] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.326 [2024-07-24 22:29:53.313475] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.326 [2024-07-24 22:29:53.321374] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.326 [2024-07-24 22:29:53.321986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.322523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.322555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.326 [2024-07-24 22:29:53.322577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.326 [2024-07-24 22:29:53.322859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.326 [2024-07-24 22:29:53.323005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.326 [2024-07-24 22:29:53.323013] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.326 [2024-07-24 22:29:53.323019] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.326 [2024-07-24 22:29:53.324820] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.326 [2024-07-24 22:29:53.333379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.326 [2024-07-24 22:29:53.334019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.334555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.334587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.326 [2024-07-24 22:29:53.334608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.326 [2024-07-24 22:29:53.335037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.326 [2024-07-24 22:29:53.335433] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.326 [2024-07-24 22:29:53.335456] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.326 [2024-07-24 22:29:53.335490] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.326 [2024-07-24 22:29:53.337249] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.326 [2024-07-24 22:29:53.345241] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.326 [2024-07-24 22:29:53.345873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.346394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.346435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.326 [2024-07-24 22:29:53.346445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.326 [2024-07-24 22:29:53.346654] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.326 [2024-07-24 22:29:53.346842] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.326 [2024-07-24 22:29:53.346853] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.326 [2024-07-24 22:29:53.346861] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.326 [2024-07-24 22:29:53.349356] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.326 [2024-07-24 22:29:53.357763] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.326 [2024-07-24 22:29:53.358384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.358917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.358947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.326 [2024-07-24 22:29:53.358969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.326 [2024-07-24 22:29:53.359408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.326 [2024-07-24 22:29:53.359588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.326 [2024-07-24 22:29:53.359595] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.326 [2024-07-24 22:29:53.359601] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.326 [2024-07-24 22:29:53.361381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.326 [2024-07-24 22:29:53.369548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.326 [2024-07-24 22:29:53.370178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.370649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.370679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.326 [2024-07-24 22:29:53.370700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.326 [2024-07-24 22:29:53.371193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.326 [2024-07-24 22:29:53.371581] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.326 [2024-07-24 22:29:53.371588] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.326 [2024-07-24 22:29:53.371595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.326 [2024-07-24 22:29:53.373343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.326 [2024-07-24 22:29:53.381540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.326 [2024-07-24 22:29:53.382175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.382653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.382664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.326 [2024-07-24 22:29:53.382671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.326 [2024-07-24 22:29:53.382772] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.326 [2024-07-24 22:29:53.382889] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.326 [2024-07-24 22:29:53.382897] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.326 [2024-07-24 22:29:53.382903] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.326 [2024-07-24 22:29:53.384584] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.326 [2024-07-24 22:29:53.393702] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.326 [2024-07-24 22:29:53.394237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.394719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.394749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.326 [2024-07-24 22:29:53.394770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.326 [2024-07-24 22:29:53.395110] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.326 [2024-07-24 22:29:53.395364] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.326 [2024-07-24 22:29:53.395372] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.326 [2024-07-24 22:29:53.395378] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.326 [2024-07-24 22:29:53.397022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.326 [2024-07-24 22:29:53.405555] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.326 [2024-07-24 22:29:53.406198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.406721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.326 [2024-07-24 22:29:53.406752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.326 [2024-07-24 22:29:53.406773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.326 [2024-07-24 22:29:53.407077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.326 [2024-07-24 22:29:53.407179] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.326 [2024-07-24 22:29:53.407187] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.326 [2024-07-24 22:29:53.407193] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.326 [2024-07-24 22:29:53.409797] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.327 [2024-07-24 22:29:53.418153] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.327 [2024-07-24 22:29:53.418779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.327 [2024-07-24 22:29:53.419281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.327 [2024-07-24 22:29:53.419316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.327 [2024-07-24 22:29:53.419338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.327 [2024-07-24 22:29:53.419670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.327 [2024-07-24 22:29:53.419796] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.327 [2024-07-24 22:29:53.419804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.327 [2024-07-24 22:29:53.419810] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.327 [2024-07-24 22:29:53.421594] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.327 [2024-07-24 22:29:53.430071] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.327 [2024-07-24 22:29:53.430669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.327 [2024-07-24 22:29:53.431126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.327 [2024-07-24 22:29:53.431159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.327 [2024-07-24 22:29:53.431181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.327 [2024-07-24 22:29:53.431510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.327 [2024-07-24 22:29:53.431939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.327 [2024-07-24 22:29:53.431963] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.327 [2024-07-24 22:29:53.431983] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.327 [2024-07-24 22:29:53.433782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.327 [2024-07-24 22:29:53.441909] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.327 [2024-07-24 22:29:53.442534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.327 [2024-07-24 22:29:53.443082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.327 [2024-07-24 22:29:53.443115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.327 [2024-07-24 22:29:53.443137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.327 [2024-07-24 22:29:53.443509] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.327 [2024-07-24 22:29:53.443578] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.327 [2024-07-24 22:29:53.443585] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.327 [2024-07-24 22:29:53.443592] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.327 [2024-07-24 22:29:53.445476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.626 [2024-07-24 22:29:53.453754] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.626 [2024-07-24 22:29:53.454356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.626 [2024-07-24 22:29:53.454746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.626 [2024-07-24 22:29:53.454756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.626 [2024-07-24 22:29:53.454763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.626 [2024-07-24 22:29:53.454910] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.626 [2024-07-24 22:29:53.454996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.626 [2024-07-24 22:29:53.455003] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.626 [2024-07-24 22:29:53.455010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.626 [2024-07-24 22:29:53.456720] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.626 [2024-07-24 22:29:53.465726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.626 [2024-07-24 22:29:53.466371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.626 [2024-07-24 22:29:53.466848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.626 [2024-07-24 22:29:53.466858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.626 [2024-07-24 22:29:53.466865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.626 [2024-07-24 22:29:53.466997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.626 [2024-07-24 22:29:53.467147] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.626 [2024-07-24 22:29:53.467156] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.626 [2024-07-24 22:29:53.467162] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.626 [2024-07-24 22:29:53.468869] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.626 [2024-07-24 22:29:53.477757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.626 [2024-07-24 22:29:53.478396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.626 [2024-07-24 22:29:53.478938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.626 [2024-07-24 22:29:53.478969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.626 [2024-07-24 22:29:53.478989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.626 [2024-07-24 22:29:53.479382] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.626 [2024-07-24 22:29:53.479764] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.626 [2024-07-24 22:29:53.479787] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.626 [2024-07-24 22:29:53.479807] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.626 [2024-07-24 22:29:53.481666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.626 [2024-07-24 22:29:53.489618] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.627 [2024-07-24 22:29:53.490265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.490798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.490828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.627 [2024-07-24 22:29:53.490850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.627 [2024-07-24 22:29:53.491293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.627 [2024-07-24 22:29:53.491677] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.627 [2024-07-24 22:29:53.491700] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.627 [2024-07-24 22:29:53.491720] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.627 [2024-07-24 22:29:53.493617] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.627 [2024-07-24 22:29:53.501375] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.627 [2024-07-24 22:29:53.502041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.502583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.502615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.627 [2024-07-24 22:29:53.502637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.627 [2024-07-24 22:29:53.502790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.627 [2024-07-24 22:29:53.502933] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.627 [2024-07-24 22:29:53.502941] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.627 [2024-07-24 22:29:53.502947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.627 [2024-07-24 22:29:53.504668] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.627 [2024-07-24 22:29:53.513097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.627 [2024-07-24 22:29:53.513741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.514270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.514303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.627 [2024-07-24 22:29:53.514325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.627 [2024-07-24 22:29:53.514605] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.627 [2024-07-24 22:29:53.514882] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.627 [2024-07-24 22:29:53.514889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.627 [2024-07-24 22:29:53.514896] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.627 [2024-07-24 22:29:53.516569] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.627 [2024-07-24 22:29:53.524794] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.627 [2024-07-24 22:29:53.525371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.525861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.525892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.627 [2024-07-24 22:29:53.525920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.627 [2024-07-24 22:29:53.526315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.627 [2024-07-24 22:29:53.526697] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.627 [2024-07-24 22:29:53.526724] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.627 [2024-07-24 22:29:53.526730] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.627 [2024-07-24 22:29:53.528397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.627 [2024-07-24 22:29:53.536597] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.627 [2024-07-24 22:29:53.537241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.537644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.537675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.627 [2024-07-24 22:29:53.537696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.627 [2024-07-24 22:29:53.537941] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.627 [2024-07-24 22:29:53.538082] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.627 [2024-07-24 22:29:53.538091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.627 [2024-07-24 22:29:53.538097] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.627 [2024-07-24 22:29:53.539682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.627 [2024-07-24 22:29:53.548447] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.627 [2024-07-24 22:29:53.549063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.549595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.549626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.627 [2024-07-24 22:29:53.549647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.627 [2024-07-24 22:29:53.550002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.627 [2024-07-24 22:29:53.550089] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.627 [2024-07-24 22:29:53.550098] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.627 [2024-07-24 22:29:53.550104] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.627 [2024-07-24 22:29:53.551848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.627 [2024-07-24 22:29:53.560427] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.627 [2024-07-24 22:29:53.561297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.561801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.561832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.627 [2024-07-24 22:29:53.561853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.627 [2024-07-24 22:29:53.562016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.627 [2024-07-24 22:29:53.562135] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.627 [2024-07-24 22:29:53.562143] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.627 [2024-07-24 22:29:53.562150] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.627 [2024-07-24 22:29:53.563877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.627 [2024-07-24 22:29:53.572417] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.627 [2024-07-24 22:29:53.572989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.573361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.573393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.627 [2024-07-24 22:29:53.573414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.627 [2024-07-24 22:29:53.573843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.627 [2024-07-24 22:29:53.574072] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.627 [2024-07-24 22:29:53.574081] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.627 [2024-07-24 22:29:53.574087] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.627 [2024-07-24 22:29:53.575903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.627 [2024-07-24 22:29:53.584393] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.627 [2024-07-24 22:29:53.584937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.585465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.585502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.627 [2024-07-24 22:29:53.585524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.627 [2024-07-24 22:29:53.585855] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.627 [2024-07-24 22:29:53.586113] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.627 [2024-07-24 22:29:53.586121] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.627 [2024-07-24 22:29:53.586128] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.627 [2024-07-24 22:29:53.587828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.627 [2024-07-24 22:29:53.596232] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.627 [2024-07-24 22:29:53.596861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.597318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.627 [2024-07-24 22:29:53.597349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.628 [2024-07-24 22:29:53.597371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.628 [2024-07-24 22:29:53.597799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.628 [2024-07-24 22:29:53.598196] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.628 [2024-07-24 22:29:53.598222] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.628 [2024-07-24 22:29:53.598242] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.628 [2024-07-24 22:29:53.599983] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.628 [2024-07-24 22:29:53.608124] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.628 [2024-07-24 22:29:53.608549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.608993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.609023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.628 [2024-07-24 22:29:53.609069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.628 [2024-07-24 22:29:53.609198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.628 [2024-07-24 22:29:53.609311] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.628 [2024-07-24 22:29:53.609318] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.628 [2024-07-24 22:29:53.609324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.628 [2024-07-24 22:29:53.612009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.628 [2024-07-24 22:29:53.620537] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.628 [2024-07-24 22:29:53.621073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.621485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.621515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.628 [2024-07-24 22:29:53.621538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.628 [2024-07-24 22:29:53.621868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.628 [2024-07-24 22:29:53.622359] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.628 [2024-07-24 22:29:53.622384] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.628 [2024-07-24 22:29:53.622404] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.628 [2024-07-24 22:29:53.624303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.628 [2024-07-24 22:29:53.632362] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.628 [2024-07-24 22:29:53.632961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.633490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.633522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.628 [2024-07-24 22:29:53.633543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.628 [2024-07-24 22:29:53.633872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.628 [2024-07-24 22:29:53.634119] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.628 [2024-07-24 22:29:53.634131] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.628 [2024-07-24 22:29:53.634138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.628 [2024-07-24 22:29:53.635987] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.628 [2024-07-24 22:29:53.644120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.628 [2024-07-24 22:29:53.644729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.645103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.645148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.628 [2024-07-24 22:29:53.645155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.628 [2024-07-24 22:29:53.645239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.628 [2024-07-24 22:29:53.645396] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.628 [2024-07-24 22:29:53.645404] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.628 [2024-07-24 22:29:53.645411] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.628 [2024-07-24 22:29:53.647213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3737956 Killed "${NVMF_APP[@]}" "$@" 00:30:58.628 22:29:53 -- host/bdevperf.sh@36 -- # tgt_init 00:30:58.628 22:29:53 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:58.628 22:29:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:58.628 22:29:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:58.628 22:29:53 -- common/autotest_common.sh@10 -- # set +x 00:30:58.628 22:29:53 -- nvmf/common.sh@469 -- # nvmfpid=3739347 00:30:58.628 22:29:53 -- nvmf/common.sh@470 -- # waitforlisten 3739347 00:30:58.628 22:29:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:58.628 [2024-07-24 22:29:53.656257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.628 22:29:53 -- common/autotest_common.sh@819 -- # '[' -z 3739347 ']' 00:30:58.628 22:29:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.628 [2024-07-24 22:29:53.656905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 22:29:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:58.628 22:29:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:58.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:58.628 [2024-07-24 22:29:53.657315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.657326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.628 [2024-07-24 22:29:53.657335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.628 [2024-07-24 22:29:53.657482] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.628 22:29:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:58.628 [2024-07-24 22:29:53.657600] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.628 [2024-07-24 22:29:53.657608] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.628 [2024-07-24 22:29:53.657618] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.628 22:29:53 -- common/autotest_common.sh@10 -- # set +x 00:30:58.628 [2024-07-24 22:29:53.659421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.628 [2024-07-24 22:29:53.668277] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.628 [2024-07-24 22:29:53.668902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.669265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.669277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.628 [2024-07-24 22:29:53.669284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.628 [2024-07-24 22:29:53.669357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.628 [2024-07-24 22:29:53.669490] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.628 [2024-07-24 22:29:53.669498] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.628 [2024-07-24 22:29:53.669504] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.628 [2024-07-24 22:29:53.671292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
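The long run of connect() failures above all report errno 111 (ECONNREFUSED on Linux): bdevperf keeps retrying its reconnects to the NVMe-oF TCP listener at 10.0.0.2:4420 during the window in which the previous nvmf_tgt process has just been killed (the bdevperf.sh "Killed" line) and tgt_init/nvmfappstart is bringing a new target up. As a rough illustration only, and not part of the test scripts, a bash loop like the following could be used to wait until that listener accepts connections again; the 0.5 s poll interval and the use of bash's /dev/tcp redirection are assumptions made for the sketch:

# Illustrative sketch: poll 10.0.0.2:4420 until connect() stops failing with
# errno 111 (connection refused), i.e. until the restarted target is listening.
until bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
    sleep 0.5
done
echo "listener on 10.0.0.2:4420 is accepting connections again"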
00:30:58.628 [2024-07-24 22:29:53.680251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.628 [2024-07-24 22:29:53.680795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.681442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.681453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.628 [2024-07-24 22:29:53.681460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.628 [2024-07-24 22:29:53.681547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.628 [2024-07-24 22:29:53.681694] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.628 [2024-07-24 22:29:53.681702] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.628 [2024-07-24 22:29:53.681709] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.628 [2024-07-24 22:29:53.683470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.628 [2024-07-24 22:29:53.692169] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.628 [2024-07-24 22:29:53.692803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.693155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.628 [2024-07-24 22:29:53.693166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.628 [2024-07-24 22:29:53.693173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.628 [2024-07-24 22:29:53.693275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.629 [2024-07-24 22:29:53.693392] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.629 [2024-07-24 22:29:53.693399] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.629 [2024-07-24 22:29:53.693406] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.629 [2024-07-24 22:29:53.695307] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.629 [2024-07-24 22:29:53.699692] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:30:58.629 [2024-07-24 22:29:53.699732] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:58.629 [2024-07-24 22:29:53.704280] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.629 [2024-07-24 22:29:53.704781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.629 [2024-07-24 22:29:53.705236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.629 [2024-07-24 22:29:53.705247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.629 [2024-07-24 22:29:53.705254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.629 [2024-07-24 22:29:53.705356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.629 [2024-07-24 22:29:53.705473] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.629 [2024-07-24 22:29:53.705480] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.629 [2024-07-24 22:29:53.705487] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.629 [2024-07-24 22:29:53.707361] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.629 [2024-07-24 22:29:53.716248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.629 [2024-07-24 22:29:53.716834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.629 [2024-07-24 22:29:53.717292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.629 [2024-07-24 22:29:53.717304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.629 [2024-07-24 22:29:53.717311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.629 [2024-07-24 22:29:53.717413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.629 [2024-07-24 22:29:53.717546] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.629 [2024-07-24 22:29:53.717554] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.629 [2024-07-24 22:29:53.717560] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.629 [2024-07-24 22:29:53.719408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.629 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.629 [2024-07-24 22:29:53.728299] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.629 [2024-07-24 22:29:53.728849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.629 [2024-07-24 22:29:53.729261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.629 [2024-07-24 22:29:53.729272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.629 [2024-07-24 22:29:53.729280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.629 [2024-07-24 22:29:53.729413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.629 [2024-07-24 22:29:53.729540] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.629 [2024-07-24 22:29:53.729551] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.629 [2024-07-24 22:29:53.729557] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.629 [2024-07-24 22:29:53.731206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.629 [2024-07-24 22:29:53.740359] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.629 [2024-07-24 22:29:53.740922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.629 [2024-07-24 22:29:53.741377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.629 [2024-07-24 22:29:53.741388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.629 [2024-07-24 22:29:53.741395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.629 [2024-07-24 22:29:53.741542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.629 [2024-07-24 22:29:53.741675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.629 [2024-07-24 22:29:53.741682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.629 [2024-07-24 22:29:53.741689] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.629 [2024-07-24 22:29:53.743581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.629 [2024-07-24 22:29:53.752418] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.629 [2024-07-24 22:29:53.753064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.629 [2024-07-24 22:29:53.753499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.629 [2024-07-24 22:29:53.753509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.629 [2024-07-24 22:29:53.753516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.629 [2024-07-24 22:29:53.753618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.629 [2024-07-24 22:29:53.753765] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.629 [2024-07-24 22:29:53.753773] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.629 [2024-07-24 22:29:53.753779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.629 [2024-07-24 22:29:53.755689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.890 [2024-07-24 22:29:53.758380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:58.890 [2024-07-24 22:29:53.764441] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.890 [2024-07-24 22:29:53.765092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.890 [2024-07-24 22:29:53.765505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.890 [2024-07-24 22:29:53.765515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.890 [2024-07-24 22:29:53.765523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.890 [2024-07-24 22:29:53.765640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.890 [2024-07-24 22:29:53.765742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.890 [2024-07-24 22:29:53.765749] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.890 [2024-07-24 22:29:53.765760] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.890 [2024-07-24 22:29:53.767646] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.890 [2024-07-24 22:29:53.776318] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.891 [2024-07-24 22:29:53.776854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.777271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.777283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.891 [2024-07-24 22:29:53.777291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.891 [2024-07-24 22:29:53.777438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.891 [2024-07-24 22:29:53.777601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.891 [2024-07-24 22:29:53.777609] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.891 [2024-07-24 22:29:53.777616] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.891 [2024-07-24 22:29:53.779502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.891 [2024-07-24 22:29:53.788419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.891 [2024-07-24 22:29:53.789067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.789730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.789740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.891 [2024-07-24 22:29:53.789749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.891 [2024-07-24 22:29:53.789834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.891 [2024-07-24 22:29:53.789948] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.891 [2024-07-24 22:29:53.789956] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.891 [2024-07-24 22:29:53.789964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.891 [2024-07-24 22:29:53.791734] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.891 [2024-07-24 22:29:53.798154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:58.891 [2024-07-24 22:29:53.798261] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:58.891 [2024-07-24 22:29:53.798269] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:58.891 [2024-07-24 22:29:53.798275] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
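The app_setup_trace notices above describe two ways to inspect the 0xFFFF tracepoint group mask that was enabled for this target: attach spdk_trace to the running app instance, or copy the shared-memory trace file for offline analysis. A minimal sketch following those hints (the /tmp destination and the snapshot file name are assumptions, and the spdk_trace binary is taken to be on PATH):

# Illustrative sketch based on the hints printed in the log above.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0                 # keep the raw trace file for offline analysis/debug
spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt     # snapshot of events from the running app (instance id 0)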
00:30:58.891 [2024-07-24 22:29:53.798311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:58.891 [2024-07-24 22:29:53.798340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:58.891 [2024-07-24 22:29:53.798341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.891 [2024-07-24 22:29:53.800478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.891 [2024-07-24 22:29:53.801017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.801456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.801472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.891 [2024-07-24 22:29:53.801482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.891 [2024-07-24 22:29:53.801586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.891 [2024-07-24 22:29:53.801720] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.891 [2024-07-24 22:29:53.801728] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.891 [2024-07-24 22:29:53.801736] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.891 [2024-07-24 22:29:53.803438] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.891 [2024-07-24 22:29:53.812638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.891 [2024-07-24 22:29:53.813199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.813556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.813567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.891 [2024-07-24 22:29:53.813576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.891 [2024-07-24 22:29:53.813696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.891 [2024-07-24 22:29:53.813814] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.891 [2024-07-24 22:29:53.813822] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.891 [2024-07-24 22:29:53.813831] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.891 [2024-07-24 22:29:53.815754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
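The "Total cores available: 3" notice and the three "Reactor started on core 1/2/3" lines above follow from the -m 0xE core mask that nvmf_tgt was started with: 0xE is binary 1110, so bits 1, 2 and 3 are set, one reactor is pinned to each of those cores, and core 0 is left out of the mask. A small bash sketch (illustrative only, not part of the test) that expands such a mask into the cores it selects:

# Illustrative sketch: expand an SPDK/DPDK core mask into the cores it covers.
# For mask=0xE this prints cores 1, 2 and 3, matching the reactor notices above.
mask=0xE
for core in $(seq 0 63); do
    (( (mask >> core) & 1 )) && echo "reactor core: $core"
done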
00:30:58.891 [2024-07-24 22:29:53.824678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.891 [2024-07-24 22:29:53.825262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.825677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.825689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.891 [2024-07-24 22:29:53.825698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.891 [2024-07-24 22:29:53.825817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.891 [2024-07-24 22:29:53.825921] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.891 [2024-07-24 22:29:53.825929] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.891 [2024-07-24 22:29:53.825938] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.891 [2024-07-24 22:29:53.827784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.891 [2024-07-24 22:29:53.836651] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.891 [2024-07-24 22:29:53.837256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.837822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.837833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.891 [2024-07-24 22:29:53.837847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.891 [2024-07-24 22:29:53.837965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.891 [2024-07-24 22:29:53.838087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.891 [2024-07-24 22:29:53.838095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.891 [2024-07-24 22:29:53.838103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.891 [2024-07-24 22:29:53.839932] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.891 [2024-07-24 22:29:53.848665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.891 [2024-07-24 22:29:53.849302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.849715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.849727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.891 [2024-07-24 22:29:53.849736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.891 [2024-07-24 22:29:53.849869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.891 [2024-07-24 22:29:53.850002] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.891 [2024-07-24 22:29:53.850010] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.891 [2024-07-24 22:29:53.850018] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.891 [2024-07-24 22:29:53.851910] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.891 [2024-07-24 22:29:53.860741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.891 [2024-07-24 22:29:53.861270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.861768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.861780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.891 [2024-07-24 22:29:53.861788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.891 [2024-07-24 22:29:53.861890] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.891 [2024-07-24 22:29:53.862007] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.891 [2024-07-24 22:29:53.862015] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.891 [2024-07-24 22:29:53.862022] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.891 [2024-07-24 22:29:53.863703] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.891 [2024-07-24 22:29:53.872796] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.891 [2024-07-24 22:29:53.873328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.873742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.891 [2024-07-24 22:29:53.873753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.891 [2024-07-24 22:29:53.873760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.891 [2024-07-24 22:29:53.873912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.891 [2024-07-24 22:29:53.874029] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.892 [2024-07-24 22:29:53.874037] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.892 [2024-07-24 22:29:53.874048] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.892 [2024-07-24 22:29:53.875865] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.892 [2024-07-24 22:29:53.884806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.892 [2024-07-24 22:29:53.885406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.885763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.885774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.892 [2024-07-24 22:29:53.885781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.892 [2024-07-24 22:29:53.885894] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.892 [2024-07-24 22:29:53.886036] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.892 [2024-07-24 22:29:53.886050] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.892 [2024-07-24 22:29:53.886057] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.892 [2024-07-24 22:29:53.887855] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.892 [2024-07-24 22:29:53.896828] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.892 [2024-07-24 22:29:53.897347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.897781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.897792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.892 [2024-07-24 22:29:53.897800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.892 [2024-07-24 22:29:53.897932] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.892 [2024-07-24 22:29:53.898033] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.892 [2024-07-24 22:29:53.898041] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.892 [2024-07-24 22:29:53.898053] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.892 [2024-07-24 22:29:53.900015] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.892 [2024-07-24 22:29:53.908932] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.892 [2024-07-24 22:29:53.909420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.909830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.909841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.892 [2024-07-24 22:29:53.909847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.892 [2024-07-24 22:29:53.909996] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.892 [2024-07-24 22:29:53.910122] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.892 [2024-07-24 22:29:53.910131] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.892 [2024-07-24 22:29:53.910137] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.892 [2024-07-24 22:29:53.911876] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.892 [2024-07-24 22:29:53.920772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.892 [2024-07-24 22:29:53.921409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.921772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.921782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.892 [2024-07-24 22:29:53.921790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.892 [2024-07-24 22:29:53.921948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.892 [2024-07-24 22:29:53.922066] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.892 [2024-07-24 22:29:53.922091] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.892 [2024-07-24 22:29:53.922097] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.892 [2024-07-24 22:29:53.923953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.892 [2024-07-24 22:29:53.932625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.892 [2024-07-24 22:29:53.933224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.933655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.933666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.892 [2024-07-24 22:29:53.933673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.892 [2024-07-24 22:29:53.933804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.892 [2024-07-24 22:29:53.933936] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.892 [2024-07-24 22:29:53.933944] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.892 [2024-07-24 22:29:53.933950] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.892 [2024-07-24 22:29:53.935754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.892 [2024-07-24 22:29:53.944520] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.892 [2024-07-24 22:29:53.945157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.945570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.945580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.892 [2024-07-24 22:29:53.945587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.892 [2024-07-24 22:29:53.945673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.892 [2024-07-24 22:29:53.945759] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.892 [2024-07-24 22:29:53.945770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.892 [2024-07-24 22:29:53.945777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.892 [2024-07-24 22:29:53.947682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.892 [2024-07-24 22:29:53.956661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.892 [2024-07-24 22:29:53.957257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.957758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.957768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.892 [2024-07-24 22:29:53.957775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.892 [2024-07-24 22:29:53.957903] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.892 [2024-07-24 22:29:53.958050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.892 [2024-07-24 22:29:53.958058] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.892 [2024-07-24 22:29:53.958064] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.892 [2024-07-24 22:29:53.960062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.892 [2024-07-24 22:29:53.968752] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.892 [2024-07-24 22:29:53.969264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.969479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.892 [2024-07-24 22:29:53.969489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.893 [2024-07-24 22:29:53.969496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.893 [2024-07-24 22:29:53.969598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.893 [2024-07-24 22:29:53.969730] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.893 [2024-07-24 22:29:53.969738] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.893 [2024-07-24 22:29:53.969745] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.893 [2024-07-24 22:29:53.971561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.893 [2024-07-24 22:29:53.980760] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.893 [2024-07-24 22:29:53.981379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.893 [2024-07-24 22:29:53.982082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.893 [2024-07-24 22:29:53.982093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.893 [2024-07-24 22:29:53.982101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.893 [2024-07-24 22:29:53.982235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.893 [2024-07-24 22:29:53.982381] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.893 [2024-07-24 22:29:53.982389] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.893 [2024-07-24 22:29:53.982399] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.893 [2024-07-24 22:29:53.984156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.893 [2024-07-24 22:29:53.992754] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.893 [2024-07-24 22:29:53.993304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.893 [2024-07-24 22:29:53.993718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.893 [2024-07-24 22:29:53.993729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.893 [2024-07-24 22:29:53.993736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.893 [2024-07-24 22:29:53.993852] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.893 [2024-07-24 22:29:53.993984] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.893 [2024-07-24 22:29:53.993991] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.893 [2024-07-24 22:29:53.993997] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.893 [2024-07-24 22:29:53.995933] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:58.893 [2024-07-24 22:29:54.004685] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.893 [2024-07-24 22:29:54.005279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.893 [2024-07-24 22:29:54.005757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.893 [2024-07-24 22:29:54.005768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.893 [2024-07-24 22:29:54.005775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.893 [2024-07-24 22:29:54.005908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.893 [2024-07-24 22:29:54.006040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.893 [2024-07-24 22:29:54.006054] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.893 [2024-07-24 22:29:54.006060] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.893 [2024-07-24 22:29:54.007902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:58.893 [2024-07-24 22:29:54.016603] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:58.893 [2024-07-24 22:29:54.017183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.893 [2024-07-24 22:29:54.017593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.893 [2024-07-24 22:29:54.017604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:58.893 [2024-07-24 22:29:54.017611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:58.893 [2024-07-24 22:29:54.017712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:58.893 [2024-07-24 22:29:54.017828] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:58.893 [2024-07-24 22:29:54.017836] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:58.893 [2024-07-24 22:29:54.017842] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:58.893 [2024-07-24 22:29:54.019648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.153 [2024-07-24 22:29:54.028533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.153 [2024-07-24 22:29:54.029067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.153 [2024-07-24 22:29:54.029404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.153 [2024-07-24 22:29:54.029414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.153 [2024-07-24 22:29:54.029421] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.153 [2024-07-24 22:29:54.029553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.153 [2024-07-24 22:29:54.029685] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.153 [2024-07-24 22:29:54.029692] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.153 [2024-07-24 22:29:54.029699] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.153 [2024-07-24 22:29:54.031499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.153 [2024-07-24 22:29:54.040550] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.153 [2024-07-24 22:29:54.041130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.153 [2024-07-24 22:29:54.041545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.153 [2024-07-24 22:29:54.041555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.153 [2024-07-24 22:29:54.041562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.153 [2024-07-24 22:29:54.041693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.153 [2024-07-24 22:29:54.041795] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.153 [2024-07-24 22:29:54.041802] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.153 [2024-07-24 22:29:54.041809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.153 [2024-07-24 22:29:54.043666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.153 [2024-07-24 22:29:54.052593] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.153 [2024-07-24 22:29:54.053177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.153 [2024-07-24 22:29:54.053346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.153 [2024-07-24 22:29:54.053356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.153 [2024-07-24 22:29:54.053363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.153 [2024-07-24 22:29:54.053494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.153 [2024-07-24 22:29:54.053641] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.154 [2024-07-24 22:29:54.053649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.154 [2024-07-24 22:29:54.053656] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.154 [2024-07-24 22:29:54.055384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.154 [2024-07-24 22:29:54.064513] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.154 [2024-07-24 22:29:54.065011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.065320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.065331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.154 [2024-07-24 22:29:54.065338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.154 [2024-07-24 22:29:54.065440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.154 [2024-07-24 22:29:54.065525] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.154 [2024-07-24 22:29:54.065533] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.154 [2024-07-24 22:29:54.065540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.154 [2024-07-24 22:29:54.067506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.154 [2024-07-24 22:29:54.076582] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.154 [2024-07-24 22:29:54.077062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.077469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.077480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.154 [2024-07-24 22:29:54.077487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.154 [2024-07-24 22:29:54.077573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.154 [2024-07-24 22:29:54.077720] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.154 [2024-07-24 22:29:54.077728] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.154 [2024-07-24 22:29:54.077735] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.154 [2024-07-24 22:29:54.079489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.154 [2024-07-24 22:29:54.088626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.154 [2024-07-24 22:29:54.089216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.089674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.089684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.154 [2024-07-24 22:29:54.089691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.154 [2024-07-24 22:29:54.089822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.154 [2024-07-24 22:29:54.089970] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.154 [2024-07-24 22:29:54.089977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.154 [2024-07-24 22:29:54.089984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.154 [2024-07-24 22:29:54.091818] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.154 [2024-07-24 22:29:54.100524] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.154 [2024-07-24 22:29:54.101051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.101416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.101427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.154 [2024-07-24 22:29:54.101434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.154 [2024-07-24 22:29:54.101535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.154 [2024-07-24 22:29:54.101682] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.154 [2024-07-24 22:29:54.101690] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.154 [2024-07-24 22:29:54.101697] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.154 [2024-07-24 22:29:54.103648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.154 [2024-07-24 22:29:54.112612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.154 [2024-07-24 22:29:54.113209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.113623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.113633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.154 [2024-07-24 22:29:54.113640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.154 [2024-07-24 22:29:54.113726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.154 [2024-07-24 22:29:54.113857] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.154 [2024-07-24 22:29:54.113865] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.154 [2024-07-24 22:29:54.113871] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.154 [2024-07-24 22:29:54.115789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.154 [2024-07-24 22:29:54.124672] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.154 [2024-07-24 22:29:54.125199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.125487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.125498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.154 [2024-07-24 22:29:54.125505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.154 [2024-07-24 22:29:54.125606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.154 [2024-07-24 22:29:54.125722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.154 [2024-07-24 22:29:54.125730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.154 [2024-07-24 22:29:54.125737] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.154 [2024-07-24 22:29:54.127506] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.154 [2024-07-24 22:29:54.136723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.154 [2024-07-24 22:29:54.137259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.137672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.137685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.154 [2024-07-24 22:29:54.137692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.154 [2024-07-24 22:29:54.137778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.154 [2024-07-24 22:29:54.137925] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.154 [2024-07-24 22:29:54.137932] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.154 [2024-07-24 22:29:54.137939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.154 [2024-07-24 22:29:54.139651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.154 [2024-07-24 22:29:54.148498] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.154 [2024-07-24 22:29:54.149136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.149545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.154 [2024-07-24 22:29:54.149555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.154 [2024-07-24 22:29:54.149562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.154 [2024-07-24 22:29:54.149678] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.155 [2024-07-24 22:29:54.149825] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.155 [2024-07-24 22:29:54.149832] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.155 [2024-07-24 22:29:54.149839] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.155 [2024-07-24 22:29:54.151595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.155 [2024-07-24 22:29:54.160238] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.155 [2024-07-24 22:29:54.160864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.161354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.161365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.155 [2024-07-24 22:29:54.161372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.155 [2024-07-24 22:29:54.161474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.155 [2024-07-24 22:29:54.161559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.155 [2024-07-24 22:29:54.161567] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.155 [2024-07-24 22:29:54.161573] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.155 [2024-07-24 22:29:54.163253] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.155 [2024-07-24 22:29:54.172262] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.155 [2024-07-24 22:29:54.172817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.173294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.173305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.155 [2024-07-24 22:29:54.173317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.155 [2024-07-24 22:29:54.173435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.155 [2024-07-24 22:29:54.173536] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.155 [2024-07-24 22:29:54.173544] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.155 [2024-07-24 22:29:54.173550] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.155 [2024-07-24 22:29:54.175183] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.155 [2024-07-24 22:29:54.184178] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.155 [2024-07-24 22:29:54.184839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.185324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.185335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.155 [2024-07-24 22:29:54.185341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.155 [2024-07-24 22:29:54.185443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.155 [2024-07-24 22:29:54.185559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.155 [2024-07-24 22:29:54.185567] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.155 [2024-07-24 22:29:54.185573] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.155 [2024-07-24 22:29:54.187314] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.155 [2024-07-24 22:29:54.196108] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.155 [2024-07-24 22:29:54.196750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.197209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.197219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.155 [2024-07-24 22:29:54.197227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.155 [2024-07-24 22:29:54.197313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.155 [2024-07-24 22:29:54.197415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.155 [2024-07-24 22:29:54.197422] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.155 [2024-07-24 22:29:54.197429] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.155 [2024-07-24 22:29:54.199167] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.155 [2024-07-24 22:29:54.208108] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.155 [2024-07-24 22:29:54.208772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.209251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.209263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.155 [2024-07-24 22:29:54.209270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.155 [2024-07-24 22:29:54.209392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.155 [2024-07-24 22:29:54.209508] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.155 [2024-07-24 22:29:54.209516] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.155 [2024-07-24 22:29:54.209523] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.155 [2024-07-24 22:29:54.211415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.155 [2024-07-24 22:29:54.220112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.155 [2024-07-24 22:29:54.220755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.221039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.221055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.155 [2024-07-24 22:29:54.221062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.155 [2024-07-24 22:29:54.221178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.155 [2024-07-24 22:29:54.221310] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.155 [2024-07-24 22:29:54.221317] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.155 [2024-07-24 22:29:54.221323] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.155 [2024-07-24 22:29:54.223197] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.155 [2024-07-24 22:29:54.232252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.155 [2024-07-24 22:29:54.232604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.233067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.233078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.155 [2024-07-24 22:29:54.233085] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.155 [2024-07-24 22:29:54.233218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.155 [2024-07-24 22:29:54.233349] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.155 [2024-07-24 22:29:54.233356] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.155 [2024-07-24 22:29:54.233363] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.155 [2024-07-24 22:29:54.235103] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.155 [2024-07-24 22:29:54.244070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.155 [2024-07-24 22:29:54.244719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.245200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.245211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.155 [2024-07-24 22:29:54.245218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.155 [2024-07-24 22:29:54.245319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.155 [2024-07-24 22:29:54.245484] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.155 [2024-07-24 22:29:54.245492] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.155 [2024-07-24 22:29:54.245498] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.155 [2024-07-24 22:29:54.247342] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.155 [2024-07-24 22:29:54.256292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.155 [2024-07-24 22:29:54.256909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.257322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.155 [2024-07-24 22:29:54.257333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.155 [2024-07-24 22:29:54.257340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.155 [2024-07-24 22:29:54.257456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.155 [2024-07-24 22:29:54.257558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.155 [2024-07-24 22:29:54.257565] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.156 [2024-07-24 22:29:54.257571] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.156 [2024-07-24 22:29:54.259446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.156 [2024-07-24 22:29:54.268131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.156 [2024-07-24 22:29:54.268624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.156 [2024-07-24 22:29:54.269103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.156 [2024-07-24 22:29:54.269114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.156 [2024-07-24 22:29:54.269120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.156 [2024-07-24 22:29:54.269252] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.156 [2024-07-24 22:29:54.269384] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.156 [2024-07-24 22:29:54.269391] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.156 [2024-07-24 22:29:54.269398] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.156 [2024-07-24 22:29:54.271239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.156 [2024-07-24 22:29:54.280183] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.156 [2024-07-24 22:29:54.280780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.156 [2024-07-24 22:29:54.281257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.156 [2024-07-24 22:29:54.281268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.156 [2024-07-24 22:29:54.281275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.156 [2024-07-24 22:29:54.281391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.156 [2024-07-24 22:29:54.281507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.156 [2024-07-24 22:29:54.281518] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.156 [2024-07-24 22:29:54.281525] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.156 [2024-07-24 22:29:54.283323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.416 [2024-07-24 22:29:54.292450] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.416 [2024-07-24 22:29:54.293140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.293596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.293607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.416 [2024-07-24 22:29:54.293614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.416 [2024-07-24 22:29:54.293746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.416 [2024-07-24 22:29:54.293846] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.416 [2024-07-24 22:29:54.293854] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.416 [2024-07-24 22:29:54.293861] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.416 [2024-07-24 22:29:54.295750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.416 [2024-07-24 22:29:54.304548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.416 [2024-07-24 22:29:54.305133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.305613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.305623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.416 [2024-07-24 22:29:54.305630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.416 [2024-07-24 22:29:54.305777] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.416 [2024-07-24 22:29:54.305878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.416 [2024-07-24 22:29:54.305885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.416 [2024-07-24 22:29:54.305892] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.416 [2024-07-24 22:29:54.307779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.416 [2024-07-24 22:29:54.316478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.416 [2024-07-24 22:29:54.317091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.317574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.317584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.416 [2024-07-24 22:29:54.317591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.416 [2024-07-24 22:29:54.317707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.416 [2024-07-24 22:29:54.317808] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.416 [2024-07-24 22:29:54.317816] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.416 [2024-07-24 22:29:54.317826] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.416 [2024-07-24 22:29:54.319383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.416 [2024-07-24 22:29:54.328502] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.416 [2024-07-24 22:29:54.329125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.329586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.329596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.416 [2024-07-24 22:29:54.329603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.416 [2024-07-24 22:29:54.329705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.416 [2024-07-24 22:29:54.329821] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.416 [2024-07-24 22:29:54.329829] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.416 [2024-07-24 22:29:54.329835] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.416 [2024-07-24 22:29:54.331558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.416 [2024-07-24 22:29:54.340536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.416 [2024-07-24 22:29:54.341179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.341680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.341691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.416 [2024-07-24 22:29:54.341698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.416 [2024-07-24 22:29:54.341799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.416 [2024-07-24 22:29:54.341946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.416 [2024-07-24 22:29:54.341953] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.416 [2024-07-24 22:29:54.341960] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.416 [2024-07-24 22:29:54.343744] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.416 [2024-07-24 22:29:54.352467] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.416 [2024-07-24 22:29:54.353048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.353529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.353539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.416 [2024-07-24 22:29:54.353546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.416 [2024-07-24 22:29:54.353632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.416 [2024-07-24 22:29:54.353779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.416 [2024-07-24 22:29:54.353787] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.416 [2024-07-24 22:29:54.353793] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.416 [2024-07-24 22:29:54.355553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.416 [2024-07-24 22:29:54.364421] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.416 [2024-07-24 22:29:54.365053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.365456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.416 [2024-07-24 22:29:54.365466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.416 [2024-07-24 22:29:54.365473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.417 [2024-07-24 22:29:54.365621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.417 [2024-07-24 22:29:54.365737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.417 [2024-07-24 22:29:54.365745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.417 [2024-07-24 22:29:54.365751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.417 [2024-07-24 22:29:54.367538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.417 [2024-07-24 22:29:54.376417] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.417 [2024-07-24 22:29:54.377031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.377471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.377481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.417 [2024-07-24 22:29:54.377489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.417 [2024-07-24 22:29:54.377590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.417 [2024-07-24 22:29:54.377721] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.417 [2024-07-24 22:29:54.377729] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.417 [2024-07-24 22:29:54.377736] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.417 [2024-07-24 22:29:54.379401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.417 [2024-07-24 22:29:54.388516] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.417 [2024-07-24 22:29:54.389114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.389573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.389583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.417 [2024-07-24 22:29:54.389590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.417 [2024-07-24 22:29:54.389691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.417 [2024-07-24 22:29:54.389777] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.417 [2024-07-24 22:29:54.389785] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.417 [2024-07-24 22:29:54.389792] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.417 [2024-07-24 22:29:54.391651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.417 [2024-07-24 22:29:54.400638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.417 [2024-07-24 22:29:54.401254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.401711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.401723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.417 [2024-07-24 22:29:54.401730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.417 [2024-07-24 22:29:54.401861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.417 [2024-07-24 22:29:54.402008] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.417 [2024-07-24 22:29:54.402016] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.417 [2024-07-24 22:29:54.402022] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.417 [2024-07-24 22:29:54.403795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.417 [2024-07-24 22:29:54.412640] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.417 [2024-07-24 22:29:54.413070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.413483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.413494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.417 [2024-07-24 22:29:54.413503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.417 [2024-07-24 22:29:54.413634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.417 [2024-07-24 22:29:54.413735] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.417 [2024-07-24 22:29:54.413743] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.417 [2024-07-24 22:29:54.413750] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.417 [2024-07-24 22:29:54.415530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.417 [2024-07-24 22:29:54.424652] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.417 [2024-07-24 22:29:54.425178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.425638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.425649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.417 [2024-07-24 22:29:54.425656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.417 [2024-07-24 22:29:54.425758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.417 [2024-07-24 22:29:54.425859] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.417 [2024-07-24 22:29:54.425868] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.417 [2024-07-24 22:29:54.425874] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.417 [2024-07-24 22:29:54.427659] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.417 [2024-07-24 22:29:54.436637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.417 [2024-07-24 22:29:54.437280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.437741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.437752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.417 [2024-07-24 22:29:54.437759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.417 [2024-07-24 22:29:54.437860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.417 [2024-07-24 22:29:54.437977] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.417 [2024-07-24 22:29:54.437985] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.417 [2024-07-24 22:29:54.437992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.417 [2024-07-24 22:29:54.439823] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.417 [2024-07-24 22:29:54.448643] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.417 [2024-07-24 22:29:54.449213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.449690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.449701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.417 [2024-07-24 22:29:54.449708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.417 [2024-07-24 22:29:54.449839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.417 [2024-07-24 22:29:54.449956] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.417 [2024-07-24 22:29:54.449965] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.417 [2024-07-24 22:29:54.449971] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.417 [2024-07-24 22:29:54.451981] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.417 [2024-07-24 22:29:54.460648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.417 [2024-07-24 22:29:54.461262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.461730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.461740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.417 [2024-07-24 22:29:54.461747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.417 [2024-07-24 22:29:54.461860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.417 [2024-07-24 22:29:54.461944] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.417 [2024-07-24 22:29:54.461951] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.417 [2024-07-24 22:29:54.461957] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.417 [2024-07-24 22:29:54.463756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.417 [2024-07-24 22:29:54.472527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.417 [2024-07-24 22:29:54.473146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.473628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.417 [2024-07-24 22:29:54.473642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.417 [2024-07-24 22:29:54.473649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.417 [2024-07-24 22:29:54.473781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.417 [2024-07-24 22:29:54.473883] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.417 [2024-07-24 22:29:54.473891] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.418 [2024-07-24 22:29:54.473897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.418 [2024-07-24 22:29:54.475837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.418 [2024-07-24 22:29:54.484578] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.418 [2024-07-24 22:29:54.485170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.418 [2024-07-24 22:29:54.485678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.418 [2024-07-24 22:29:54.485689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.418 [2024-07-24 22:29:54.485696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.418 [2024-07-24 22:29:54.485844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.418 [2024-07-24 22:29:54.486006] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.418 [2024-07-24 22:29:54.486014] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.418 [2024-07-24 22:29:54.486020] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.418 [2024-07-24 22:29:54.487760] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.418 [2024-07-24 22:29:54.496700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.418 [2024-07-24 22:29:54.497278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.418 [2024-07-24 22:29:54.497767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.418 [2024-07-24 22:29:54.497778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.418 [2024-07-24 22:29:54.497785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.418 [2024-07-24 22:29:54.497901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.418 [2024-07-24 22:29:54.498034] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.418 [2024-07-24 22:29:54.498047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.418 [2024-07-24 22:29:54.498054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.418 [2024-07-24 22:29:54.499865] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.418 22:29:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:59.418 22:29:54 -- common/autotest_common.sh@852 -- # return 0 00:30:59.418 22:29:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:59.418 22:29:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:59.418 22:29:54 -- common/autotest_common.sh@10 -- # set +x 00:30:59.418 [2024-07-24 22:29:54.508794] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.418 [2024-07-24 22:29:54.509376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.418 [2024-07-24 22:29:54.509885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.418 [2024-07-24 22:29:54.509895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.418 [2024-07-24 22:29:54.509902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.418 [2024-07-24 22:29:54.510055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.418 [2024-07-24 22:29:54.510157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.418 [2024-07-24 22:29:54.510165] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.418 [2024-07-24 22:29:54.510172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.418 [2024-07-24 22:29:54.511967] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.418 [2024-07-24 22:29:54.520893] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.418 [2024-07-24 22:29:54.521498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.418 [2024-07-24 22:29:54.521857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.418 [2024-07-24 22:29:54.521867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.418 [2024-07-24 22:29:54.521874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.418 [2024-07-24 22:29:54.521975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.418 [2024-07-24 22:29:54.522080] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.418 [2024-07-24 22:29:54.522090] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.418 [2024-07-24 22:29:54.522097] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.418 [2024-07-24 22:29:54.523774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.418 [2024-07-24 22:29:54.532934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.418 [2024-07-24 22:29:54.533491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.418 [2024-07-24 22:29:54.533789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.418 [2024-07-24 22:29:54.533799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.418 [2024-07-24 22:29:54.533806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.418 [2024-07-24 22:29:54.533938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.418 [2024-07-24 22:29:54.534075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.418 [2024-07-24 22:29:54.534084] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.418 [2024-07-24 22:29:54.534091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.418 [2024-07-24 22:29:54.535933] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.418 22:29:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.418 22:29:54 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:59.418 22:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:59.418 22:29:54 -- common/autotest_common.sh@10 -- # set +x 00:30:59.418 [2024-07-24 22:29:54.544815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.418 [2024-07-24 22:29:54.545364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.418 [2024-07-24 22:29:54.545848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.418 [2024-07-24 22:29:54.545858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.418 [2024-07-24 22:29:54.545865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.418 [2024-07-24 22:29:54.545982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.418 [2024-07-24 22:29:54.546119] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.418 [2024-07-24 22:29:54.546128] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.418 [2024-07-24 22:29:54.546135] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.677 [2024-07-24 22:29:54.547931] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.677 [2024-07-24 22:29:54.549170] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.677 22:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:59.677 22:29:54 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:59.677 22:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:59.677 22:29:54 -- common/autotest_common.sh@10 -- # set +x 00:30:59.677 [2024-07-24 22:29:54.556958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.677 [2024-07-24 22:29:54.557480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.677 [2024-07-24 22:29:54.558009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.677 [2024-07-24 22:29:54.558020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.677 [2024-07-24 22:29:54.558027] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.677 [2024-07-24 22:29:54.558133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.677 [2024-07-24 22:29:54.558250] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.677 [2024-07-24 22:29:54.558258] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.677 [2024-07-24 22:29:54.558264] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:59.677 [2024-07-24 22:29:54.560049] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.677 [2024-07-24 22:29:54.568858] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.677 [2024-07-24 22:29:54.569493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.677 [2024-07-24 22:29:54.570003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.677 [2024-07-24 22:29:54.570013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.677 [2024-07-24 22:29:54.570020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.677 [2024-07-24 22:29:54.570186] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.677 [2024-07-24 22:29:54.570273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.677 [2024-07-24 22:29:54.570280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.677 [2024-07-24 22:29:54.570291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.677 [2024-07-24 22:29:54.572163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.677 [2024-07-24 22:29:54.580778] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.677 [2024-07-24 22:29:54.581345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.677 [2024-07-24 22:29:54.581800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.677 [2024-07-24 22:29:54.581810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.677 [2024-07-24 22:29:54.581817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.677 [2024-07-24 22:29:54.581917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.677 [2024-07-24 22:29:54.582018] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.677 [2024-07-24 22:29:54.582026] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.677 [2024-07-24 22:29:54.582032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.677 [2024-07-24 22:29:54.583785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.677 [2024-07-24 22:29:54.592710] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.678 [2024-07-24 22:29:54.593340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.678 [2024-07-24 22:29:54.593811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.678 [2024-07-24 22:29:54.593821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.678 [2024-07-24 22:29:54.593829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.678 [2024-07-24 22:29:54.593994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.678 [2024-07-24 22:29:54.594130] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.678 [2024-07-24 22:29:54.594139] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.678 [2024-07-24 22:29:54.594146] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.678 [2024-07-24 22:29:54.596034] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.678 Malloc0 00:30:59.678 22:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:59.678 22:29:54 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:59.678 22:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:59.678 22:29:54 -- common/autotest_common.sh@10 -- # set +x 00:30:59.678 [2024-07-24 22:29:54.604776] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.678 [2024-07-24 22:29:54.605425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.678 [2024-07-24 22:29:54.605832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.678 [2024-07-24 22:29:54.605843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.678 [2024-07-24 22:29:54.605851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.678 [2024-07-24 22:29:54.605938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.678 [2024-07-24 22:29:54.606074] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.678 [2024-07-24 22:29:54.606087] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.678 [2024-07-24 22:29:54.606094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.678 [2024-07-24 22:29:54.607904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:59.678 22:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:59.678 22:29:54 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:59.678 22:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:59.678 22:29:54 -- common/autotest_common.sh@10 -- # set +x 00:30:59.678 [2024-07-24 22:29:54.616840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.678 [2024-07-24 22:29:54.617467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.678 [2024-07-24 22:29:54.617926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:59.678 [2024-07-24 22:29:54.617937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14dc510 with addr=10.0.0.2, port=4420 00:30:59.678 [2024-07-24 22:29:54.617944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dc510 is same with the state(5) to be set 00:30:59.678 [2024-07-24 22:29:54.618065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dc510 (9): Bad file descriptor 00:30:59.678 [2024-07-24 22:29:54.618183] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:59.678 [2024-07-24 22:29:54.618191] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:59.678 [2024-07-24 22:29:54.618198] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:59.678 22:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:59.678 22:29:54 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.678 22:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:59.678 22:29:54 -- common/autotest_common.sh@10 -- # set +x 00:30:59.678 [2024-07-24 22:29:54.619965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:59.678 [2024-07-24 22:29:54.622378] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.678 22:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:59.678 22:29:54 -- host/bdevperf.sh@38 -- # wait 3738401 00:30:59.678 [2024-07-24 22:29:54.628913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:59.678 [2024-07-24 22:29:54.782380] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
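Stripped of the interleaved reconnect errors, the rpc_cmd calls traced above are the entire target-side bring-up for this bdevperf run: create the TCP transport, back it with a 64 MiB malloc bdev, create subsystem nqn.2016-06.io.spdk:cnode1, attach the bdev as a namespace, and finally add the TCP listener on 10.0.0.2:4420, at which point the pending controller reset succeeds. The same sequence, sketched as direct scripts/rpc.py calls; rpc.py here stands in for the test's rpc_cmd wrapper, which talks to the nvmf_tgt running in the cvl_0_0_ns_spdk namespace, and the RPC socket path is whatever that target was started with:

    # flags copied from the trace above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420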
00:31:09.647 
00:31:09.647                                                               Latency(us)
00:31:09.647 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:09.647 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:09.647 Verification LBA range: start 0x0 length 0x4000
00:31:09.647 Nvme1n1                     :      15.01   12035.18      47.01   18603.44       0.00    4165.76    1168.25   23592.96
00:31:09.647 ===================================================================================================================
00:31:09.647 Total                       :            12035.18      47.01   18603.44       0.00    4165.76    1168.25   23592.96
00:31:09.647 22:30:03 -- host/bdevperf.sh@39 -- # sync
00:31:09.647 22:30:03 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:09.647 22:30:03 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:09.647 22:30:03 -- common/autotest_common.sh@10 -- # set +x
00:31:09.647 22:30:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:09.647 22:30:03 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:31:09.647 22:30:03 -- host/bdevperf.sh@44 -- # nvmftestfini
00:31:09.647 22:30:03 -- nvmf/common.sh@476 -- # nvmfcleanup
00:31:09.647 22:30:03 -- nvmf/common.sh@116 -- # sync
00:31:09.647 22:30:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:31:09.647 22:30:03 -- nvmf/common.sh@119 -- # set +e
00:31:09.647 22:30:03 -- nvmf/common.sh@120 -- # for i in {1..20}
00:31:09.647 22:30:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:31:09.647 rmmod nvme_tcp
00:31:09.647 rmmod nvme_fabrics
00:31:09.647 rmmod nvme_keyring
00:31:09.647 22:30:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:31:09.647 22:30:03 -- nvmf/common.sh@123 -- # set -e
00:31:09.647 22:30:03 -- nvmf/common.sh@124 -- # return 0
00:31:09.647 22:30:03 -- nvmf/common.sh@477 -- # '[' -n 3739347 ']'
00:31:09.647 22:30:03 -- nvmf/common.sh@478 -- # killprocess 3739347
00:31:09.647 22:30:03 -- common/autotest_common.sh@926 -- # '[' -z 3739347 ']'
00:31:09.647 22:30:03 -- common/autotest_common.sh@930 -- # kill -0 3739347
00:31:09.648 22:30:03 -- common/autotest_common.sh@931 -- # uname
00:31:09.648 22:30:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:31:09.648 22:30:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3739347
00:31:09.648 22:30:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:31:09.648 22:30:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:31:09.648 22:30:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3739347'
00:31:09.648 killing process with pid 3739347
00:31:09.648 22:30:03 -- common/autotest_common.sh@945 -- # kill 3739347
00:31:09.648 22:30:03 -- common/autotest_common.sh@950 -- # wait 3739347
00:31:09.648 22:30:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:31:09.648 22:30:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:31:09.648 22:30:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:31:09.648 22:30:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:09.648 22:30:03 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:31:09.648 22:30:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:09.648 22:30:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:31:09.648 22:30:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:10.586 22:30:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:31:10.586 
00:31:10.586 real 0m25.950s
00:31:10.586 user 1m2.140s
00:31:10.586 sys 0m6.376s
22:30:05 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:31:10.586 22:30:05 -- common/autotest_common.sh@10 -- # set +x 00:31:10.586 ************************************ 00:31:10.586 END TEST nvmf_bdevperf 00:31:10.586 ************************************ 00:31:10.586 22:30:05 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:10.586 22:30:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:10.586 22:30:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:10.586 22:30:05 -- common/autotest_common.sh@10 -- # set +x 00:31:10.586 ************************************ 00:31:10.586 START TEST nvmf_target_disconnect 00:31:10.586 ************************************ 00:31:10.586 22:30:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:31:10.845 * Looking for test storage... 00:31:10.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:10.845 22:30:05 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.845 22:30:05 -- nvmf/common.sh@7 -- # uname -s 00:31:10.845 22:30:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.845 22:30:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.845 22:30:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.845 22:30:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.845 22:30:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.845 22:30:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.845 22:30:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.845 22:30:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.845 22:30:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.845 22:30:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.845 22:30:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:10.845 22:30:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:10.845 22:30:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.845 22:30:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.845 22:30:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.845 22:30:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.845 22:30:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.845 22:30:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.845 22:30:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.845 22:30:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.845 22:30:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.845 22:30:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.845 22:30:05 -- paths/export.sh@5 -- # export PATH 00:31:10.845 22:30:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.845 22:30:05 -- nvmf/common.sh@46 -- # : 0 00:31:10.845 22:30:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:10.845 22:30:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:10.845 22:30:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:10.845 22:30:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.845 22:30:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.845 22:30:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:10.845 22:30:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:10.845 22:30:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:10.845 22:30:05 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:10.845 22:30:05 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:31:10.845 22:30:05 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:31:10.845 22:30:05 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:31:10.845 22:30:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:10.845 22:30:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.845 22:30:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:10.845 22:30:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:10.845 22:30:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:10.845 22:30:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.845 22:30:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:10.845 22:30:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.845 22:30:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:10.845 22:30:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:10.845 22:30:05 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:31:10.845 22:30:05 -- common/autotest_common.sh@10 -- # set +x 00:31:16.121 22:30:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:16.121 22:30:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:16.121 22:30:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:16.121 22:30:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:16.121 22:30:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:16.121 22:30:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:16.121 22:30:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:16.122 22:30:10 -- nvmf/common.sh@294 -- # net_devs=() 00:31:16.122 22:30:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:16.122 22:30:10 -- nvmf/common.sh@295 -- # e810=() 00:31:16.122 22:30:10 -- nvmf/common.sh@295 -- # local -ga e810 00:31:16.122 22:30:10 -- nvmf/common.sh@296 -- # x722=() 00:31:16.122 22:30:10 -- nvmf/common.sh@296 -- # local -ga x722 00:31:16.122 22:30:10 -- nvmf/common.sh@297 -- # mlx=() 00:31:16.122 22:30:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:16.122 22:30:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.122 22:30:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.122 22:30:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.122 22:30:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.122 22:30:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.122 22:30:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.122 22:30:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.122 22:30:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.122 22:30:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.122 22:30:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.122 22:30:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.122 22:30:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:16.122 22:30:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:16.122 22:30:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:16.122 22:30:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:16.122 22:30:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:16.122 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:16.122 22:30:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:16.122 22:30:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:16.122 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:16.122 22:30:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:16.122 22:30:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:16.122 22:30:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.122 22:30:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:16.122 22:30:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.122 22:30:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:16.122 Found net devices under 0000:86:00.0: cvl_0_0 00:31:16.122 22:30:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.122 22:30:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:16.122 22:30:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.122 22:30:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:16.122 22:30:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.122 22:30:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:16.122 Found net devices under 0000:86:00.1: cvl_0_1 00:31:16.122 22:30:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.122 22:30:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:16.122 22:30:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:16.122 22:30:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:16.122 22:30:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.122 22:30:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.122 22:30:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.122 22:30:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:16.122 22:30:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.122 22:30:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.122 22:30:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:16.122 22:30:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.122 22:30:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.122 22:30:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:16.122 22:30:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:16.122 22:30:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.122 22:30:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.122 22:30:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.122 22:30:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.122 22:30:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:16.122 22:30:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.122 22:30:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.122 22:30:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.122 22:30:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:16.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:16.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:31:16.122 00:31:16.122 --- 10.0.0.2 ping statistics --- 00:31:16.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.122 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:31:16.122 22:30:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:16.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:31:16.122 00:31:16.122 --- 10.0.0.1 ping statistics --- 00:31:16.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.122 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:31:16.122 22:30:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.122 22:30:10 -- nvmf/common.sh@410 -- # return 0 00:31:16.122 22:30:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:16.122 22:30:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.122 22:30:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:16.122 22:30:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.122 22:30:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:16.122 22:30:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:16.122 22:30:10 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:31:16.122 22:30:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:16.122 22:30:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:16.122 22:30:10 -- common/autotest_common.sh@10 -- # set +x 00:31:16.122 ************************************ 00:31:16.122 START TEST nvmf_target_disconnect_tc1 00:31:16.122 ************************************ 00:31:16.122 22:30:10 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:31:16.122 22:30:10 -- host/target_disconnect.sh@32 -- # set +e 00:31:16.122 22:30:10 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:16.122 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.122 [2024-07-24 22:30:11.054323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.122 [2024-07-24 22:30:11.054784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.122 [2024-07-24 22:30:11.054798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5a5b20 with addr=10.0.0.2, port=4420 00:31:16.122 [2024-07-24 22:30:11.054819] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:16.122 [2024-07-24 22:30:11.054836] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:16.122 [2024-07-24 22:30:11.054843] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:31:16.122 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:31:16.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:31:16.122 Initializing NVMe Controllers 00:31:16.122 22:30:11 -- host/target_disconnect.sh@33 -- # trap - ERR 00:31:16.122 22:30:11 -- host/target_disconnect.sh@33 -- # print_backtrace 00:31:16.122 22:30:11 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:31:16.122 22:30:11 -- common/autotest_common.sh@1132 -- # return 0 00:31:16.122 
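For readers skimming the xtrace, the nvmf_tcp_init sequence above reduces to a handful of iproute2/iptables commands. The sketch below is a hand-condensed reconstruction of those commands, not the script itself; the port names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are simply what this node discovered and assigned.

  # Reconstructed from the nvmf/common.sh xtrace above (tcp transport, two ports of one NIC).
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                               # target side lives in its own namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator IP stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (4420) on the initiator-side port
  ping -c 1 10.0.0.2                                         # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target ns -> root ns
  modprobe nvme-tcp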
22:30:11 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:31:16.122 22:30:11 -- host/target_disconnect.sh@41 -- # set -e 00:31:16.122 00:31:16.122 real 0m0.095s 00:31:16.122 user 0m0.041s 00:31:16.122 sys 0m0.054s 00:31:16.122 22:30:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:16.122 22:30:11 -- common/autotest_common.sh@10 -- # set +x 00:31:16.122 ************************************ 00:31:16.122 END TEST nvmf_target_disconnect_tc1 00:31:16.123 ************************************ 00:31:16.123 22:30:11 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:31:16.123 22:30:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:16.123 22:30:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:16.123 22:30:11 -- common/autotest_common.sh@10 -- # set +x 00:31:16.123 ************************************ 00:31:16.123 START TEST nvmf_target_disconnect_tc2 00:31:16.123 ************************************ 00:31:16.123 22:30:11 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:31:16.123 22:30:11 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:31:16.123 22:30:11 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:16.123 22:30:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:16.123 22:30:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:16.123 22:30:11 -- common/autotest_common.sh@10 -- # set +x 00:31:16.123 22:30:11 -- nvmf/common.sh@469 -- # nvmfpid=3744919 00:31:16.123 22:30:11 -- nvmf/common.sh@470 -- # waitforlisten 3744919 00:31:16.123 22:30:11 -- common/autotest_common.sh@819 -- # '[' -z 3744919 ']' 00:31:16.123 22:30:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.123 22:30:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:16.123 22:30:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.123 22:30:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:16.123 22:30:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:16.123 22:30:11 -- common/autotest_common.sh@10 -- # set +x 00:31:16.123 [2024-07-24 22:30:11.150959] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:31:16.123 [2024-07-24 22:30:11.151004] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.123 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.123 [2024-07-24 22:30:11.221847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:16.382 [2024-07-24 22:30:11.263594] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:16.382 [2024-07-24 22:30:11.263700] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.382 [2024-07-24 22:30:11.263708] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.382 [2024-07-24 22:30:11.263715] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
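The nvmfappstart/disconnect_init step that follows boils down to launching nvmf_tgt inside the target namespace and then provisioning it over JSON-RPC. A condensed sketch of that sequence, with the arguments copied from the xtrace; rpc.py is shown as a stand-in for the harness's rpc_cmd wrapper (an assumption about the wrapper), and paths are shortened to the SPDK repo root.

  # Start the target in the namespace created above, then provision it over RPC.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # ... wait for the RPC socket /var/tmp/spdk.sock to appear (waitforlisten in the log) ...
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420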
00:31:16.382 [2024-07-24 22:30:11.263830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:16.382 [2024-07-24 22:30:11.263941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:16.382 [2024-07-24 22:30:11.264051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:16.382 [2024-07-24 22:30:11.264069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:16.948 22:30:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:16.948 22:30:11 -- common/autotest_common.sh@852 -- # return 0 00:31:16.948 22:30:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:16.948 22:30:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:16.948 22:30:11 -- common/autotest_common.sh@10 -- # set +x 00:31:16.948 22:30:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.948 22:30:11 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:16.948 22:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.948 22:30:11 -- common/autotest_common.sh@10 -- # set +x 00:31:16.948 Malloc0 00:31:16.948 22:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.948 22:30:11 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:16.948 22:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.948 22:30:11 -- common/autotest_common.sh@10 -- # set +x 00:31:16.948 [2024-07-24 22:30:11.987019] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.948 22:30:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.948 22:30:11 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:16.948 22:30:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.948 22:30:11 -- common/autotest_common.sh@10 -- # set +x 00:31:16.948 22:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.948 22:30:12 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:16.948 22:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.948 22:30:12 -- common/autotest_common.sh@10 -- # set +x 00:31:16.948 22:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.948 22:30:12 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.948 22:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.948 22:30:12 -- common/autotest_common.sh@10 -- # set +x 00:31:16.948 [2024-07-24 22:30:12.015276] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.948 22:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.948 22:30:12 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:16.948 22:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:16.948 22:30:12 -- common/autotest_common.sh@10 -- # set +x 00:31:16.948 22:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:16.948 22:30:12 -- host/target_disconnect.sh@50 -- # reconnectpid=3745081 00:31:16.948 22:30:12 -- host/target_disconnect.sh@52 -- # sleep 2 00:31:16.948 22:30:12 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:16.948 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.495 22:30:14 -- host/target_disconnect.sh@53 -- # kill -9 3744919 00:31:19.495 22:30:14 -- host/target_disconnect.sh@55 -- # sleep 2 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Write completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Write completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Write completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Write completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Write completed with error (sct=0, sc=8) 00:31:19.495 starting I/O failed 00:31:19.495 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 [2024-07-24 22:30:14.040371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with 
error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 [2024-07-24 22:30:14.040575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 
00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 [2024-07-24 22:30:14.040791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 
starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Write completed with error (sct=0, sc=8) 00:31:19.496 starting I/O failed 00:31:19.496 Read completed with error (sct=0, sc=8) 00:31:19.497 starting I/O failed 00:31:19.497 Read completed with error (sct=0, sc=8) 00:31:19.497 starting I/O failed 00:31:19.497 Write completed with error (sct=0, sc=8) 00:31:19.497 starting I/O failed 00:31:19.497 Read completed with error (sct=0, sc=8) 00:31:19.497 starting I/O failed 00:31:19.497 Read completed with error (sct=0, sc=8) 00:31:19.497 starting I/O failed 00:31:19.497 Write completed with error (sct=0, sc=8) 00:31:19.497 starting I/O failed 00:31:19.497 Write completed with error (sct=0, sc=8) 00:31:19.497 starting I/O failed 00:31:19.497 [2024-07-24 22:30:14.040975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:19.497 [2024-07-24 22:30:14.041244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.041657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.041692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.042114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.042560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.042590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.042985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.043384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.043414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.043811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.044259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.044269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 
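The burst of "completed with error" entries and the repeated "connect() failed, errno = 111" lines here appear to be the expected outcome of tc2's fault injection: the reconnect example is started against the freshly provisioned target, the target process is then killed outright, so every in-flight I/O completes in error and each reconnection attempt is refused (errno 111 is ECONNREFUSED; nothing is listening on 10.0.0.2:4420 any more). A minimal sketch of that sequence, reconstructed from the host/target_disconnect.sh lines above; reconnectpid and nvmfpid mirror the PIDs the harness captured on this run.

  # tc2 fault injection (cf. host/target_disconnect.sh@48-55 in the xtrace above).
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!
  sleep 2                 # let I/O get going on the qpairs
  kill -9 "$nvmfpid"      # hard-kill nvmf_tgt: queued I/O fails, reconnects get ECONNREFUSED (111)
  sleep 2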
00:31:19.497 [2024-07-24 22:30:14.044674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.045027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.045069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.045455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.045736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.045765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.046169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.046569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.046598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.046788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.047171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.047201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.047502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.047930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.047959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.048336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.048681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.048694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.049118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.049544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.049557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 
00:31:19.497 [2024-07-24 22:30:14.049950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.050358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.050371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.050835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.051177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.051190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.051530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.052004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.052033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.052405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.052846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.052876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.053332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.053759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.053788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.054284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.054825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.054854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.055298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.055809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.055839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 
00:31:19.497 [2024-07-24 22:30:14.056282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.056573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.056602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.056797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.057169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.057199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.057638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.058123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.058153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.058597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.059033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.059088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.059793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.060060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.060091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.060540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.061035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.061078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 00:31:19.497 [2024-07-24 22:30:14.061457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.061938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.061967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.497 qpair failed and we were unable to recover it. 
00:31:19.497 [2024-07-24 22:30:14.062353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.497 [2024-07-24 22:30:14.062787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.062816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.063271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.063700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.063728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.064220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.064605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.064619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.065022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.065428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.065458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.065837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.066217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.066247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.066690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.067113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.067143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.067540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.067967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.067996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 
00:31:19.498 [2024-07-24 22:30:14.068439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.068905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.068934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.069332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.069704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.069733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.070187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.070696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.070725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.071144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.071584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.071613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.072103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.072512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.072541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.072908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.073416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.073446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.073939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.074371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.074401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 
00:31:19.498 [2024-07-24 22:30:14.074831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.075266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.075296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.075660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.076108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.076137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.076601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.077040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.077075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.077512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.077918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.077948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.078469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.078954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.078989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.079191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.079607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.079636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.080088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.080602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.080631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 
00:31:19.498 [2024-07-24 22:30:14.081022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.081516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.081545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.081984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.082417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.082447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.082975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.083356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.083386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.083872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.084331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.084361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.084799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.085243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.085274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.085662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.086059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.086089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 00:31:19.498 [2024-07-24 22:30:14.086565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.086931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.498 [2024-07-24 22:30:14.086959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.498 qpair failed and we were unable to recover it. 
00:31:19.499 [2024-07-24 22:30:14.087398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.087883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.087913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.088296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.088702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.088731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.089171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.089590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.089620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.090040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.090661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.090690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.091172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.091591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.091620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.092130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.092518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.092547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.092935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.093424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.093455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 
00:31:19.499 [2024-07-24 22:30:14.093833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.094210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.094223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.094697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.095092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.095105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.095455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.095859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.095872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.096222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.096571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.096584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.096990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.097391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.097405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.097828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.098161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.098175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.098571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.098781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.098794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 
00:31:19.499 [2024-07-24 22:30:14.099134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.099588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.099601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.100010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.100462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.100475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.100837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.101233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.101246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.101702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.102026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.102039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.102383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.102718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.102731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.103153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.103576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.103589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.103930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.104329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.104353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 
00:31:19.499 [2024-07-24 22:30:14.104830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.105178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.105191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.105600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.105937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.105950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.106355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.106746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.106759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.107156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.107634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.107647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.108053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.108388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.108401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.499 qpair failed and we were unable to recover it. 00:31:19.499 [2024-07-24 22:30:14.108820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.109215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.499 [2024-07-24 22:30:14.109228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.109631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.109979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.109992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 
00:31:19.500 [2024-07-24 22:30:14.110393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.110831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.110844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.111252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.111742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.111755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.112210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.112548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.112562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.112986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.113409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.113422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.113824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.114236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.114249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.114678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.115014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.115027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.115529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.115979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.115992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 
00:31:19.500 [2024-07-24 22:30:14.116329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.116682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.116694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.117173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.117652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.117665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.118057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.118531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.118544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.119025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.119511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.119525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.119766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.120184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.120197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.120605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.121059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.121073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.121504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.121838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.121851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 
00:31:19.500 [2024-07-24 22:30:14.122277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.122697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.122710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.123110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.123587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.123600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.123998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.124427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.124440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.124779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.125178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.125191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.125612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.125814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.125827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.500 qpair failed and we were unable to recover it. 00:31:19.500 [2024-07-24 22:30:14.126280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.126782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.500 [2024-07-24 22:30:14.126795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.127201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.127678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.127691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 
00:31:19.501 [2024-07-24 22:30:14.128167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.128569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.128582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.128981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.129146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.129162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.129491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.129841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.129854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.130261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.130670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.130683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.131033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.131509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.131522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.131719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.132127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.132140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.132481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.132957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.132970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 
00:31:19.501 [2024-07-24 22:30:14.133447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.133899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.133912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.134392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.134867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.134880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.135290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.135623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.135636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.136088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.136502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.136515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.136950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.137400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.137416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.137836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.138309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.138322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.138730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.139206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.139219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 
00:31:19.501 [2024-07-24 22:30:14.139613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.140067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.140081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.140554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.140955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.140968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.141365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.141586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.141598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.142055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.142458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.142471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.142899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.143314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.143328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.143804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.144256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.144269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.144681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.145156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.145169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 
00:31:19.501 [2024-07-24 22:30:14.145573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.146049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.146065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.146307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.146722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.146735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.147234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.147638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.147651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.148072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.148543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.148556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.148951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.149428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.149441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.501 qpair failed and we were unable to recover it. 00:31:19.501 [2024-07-24 22:30:14.149916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.501 [2024-07-24 22:30:14.150303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.150316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.150514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.150995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.151008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 
00:31:19.502 [2024-07-24 22:30:14.151488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.151834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.151847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.152066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.152531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.152545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.152944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.153416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.153430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.153832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.154282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.154297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.154506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.154902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.154916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.155249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.155650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.155663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.156136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.156609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.156622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 
00:31:19.502 [2024-07-24 22:30:14.157104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.157527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.157541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.157926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.158331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.158345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.158821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.159159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.159172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.159662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.160140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.160154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.160638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.160881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.160894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.161348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.161767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.161780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.162120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.162590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.162604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 
00:31:19.502 [2024-07-24 22:30:14.163062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.163449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.163461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.163939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.164343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.164356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.164742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.165061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.165074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.165549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.166021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.166034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.166529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.166930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.166943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.167348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.167869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.167898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.168431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.168921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.168950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 
00:31:19.502 [2024-07-24 22:30:14.169373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.169879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.169908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.502 [2024-07-24 22:30:14.170155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.170595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.502 [2024-07-24 22:30:14.170625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.502 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.171075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.171453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.171466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.171811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.172249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.172278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.172652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.173060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.173090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.173476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.173963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.173993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.174424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.174910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.174939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 
00:31:19.503 [2024-07-24 22:30:14.175364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.175728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.175757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.176223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.176732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.176761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.177235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.177623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.177651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.178161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.178600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.178629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.179010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.179392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.179422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.179811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.180166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.180179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.180530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.180970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.181000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 
00:31:19.503 [2024-07-24 22:30:14.181458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.181946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.181975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.182444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.182836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.182865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.183309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.183758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.183787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.184267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.184757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.184785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.185224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.185587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.185616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.186061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.186419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.186447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.186815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.187182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.187211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 
00:31:19.503 [2024-07-24 22:30:14.187641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.188073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.188103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.188490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.188948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.188977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.189359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.189783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.189812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.190420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.190862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.190891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.191131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.191522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.191551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.191931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.192392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.192421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.192803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.193246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.193277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 
00:31:19.503 [2024-07-24 22:30:14.193658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.194161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.194190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.503 [2024-07-24 22:30:14.194653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.195033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.503 [2024-07-24 22:30:14.195072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.503 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.195445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.195883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.195912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.196343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.196710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.196739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.197178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.197557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.197586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.197982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.198423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.198452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.198809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.199080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.199095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 
00:31:19.504 [2024-07-24 22:30:14.199516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.199878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.199907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.200344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.200772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.200801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.201289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.201729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.201758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.202214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.202643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.202671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.203098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.203440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.203469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.203900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.204291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.204321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.204759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.205159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.205188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 
00:31:19.504 [2024-07-24 22:30:14.205709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.206325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.206354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.206812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.207244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.207257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.207606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.208030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.208065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.208555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.208928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.208958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.209399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.209643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.209672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.210114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.210469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.210497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.210857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.211241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.211270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 
00:31:19.504 [2024-07-24 22:30:14.211702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.212141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.212171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.212616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.213053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.213083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.213549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.213979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.214008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.214533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.214902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.214931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.215369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.215880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.215893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.216389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.216742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.216755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 00:31:19.504 [2024-07-24 22:30:14.217161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.217643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.504 [2024-07-24 22:30:14.217673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.504 qpair failed and we were unable to recover it. 
00:31:19.504 [2024-07-24 22:30:14.218116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.218574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.218603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.219036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.219726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.219756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.220249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.220677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.220707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.221148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.221644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.221673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.222055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.222532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.222561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.222935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.223370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.223401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.223820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.224228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.224242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 
00:31:19.505 [2024-07-24 22:30:14.224646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.225010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.225039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.225429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.225857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.225886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.226310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.226762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.226790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.227350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.227738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.227767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.228064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.228574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.228603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.228978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.229406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.229436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.229927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.230331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.230361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 
00:31:19.505 [2024-07-24 22:30:14.230798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.231161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.231191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.231652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.232317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.232347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.232784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.233064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.233094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.233489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.233882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.233912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.234400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.234833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.234862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.235302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.235770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.235799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 00:31:19.505 [2024-07-24 22:30:14.236258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.236900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.236929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.505 qpair failed and we were unable to recover it. 
00:31:19.505 [2024-07-24 22:30:14.237315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.505 [2024-07-24 22:30:14.237680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.237709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.238222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.238654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.238683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.239110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.239526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.239556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.239986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.240365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.240395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.240768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.241256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.241286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.241798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.242240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.242270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.242733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.243173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.243203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 
00:31:19.506 [2024-07-24 22:30:14.243578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.244007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.244035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.244533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.244968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.244997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.245375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.245819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.245832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.246244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.246649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.246678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.247111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.247593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.247623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.248065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.248500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.248529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.248911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.249282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.249312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 
00:31:19.506 [2024-07-24 22:30:14.249699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.250079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.250112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.250582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.251002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.251031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.251417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.251806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.251819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.251974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.252324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.252338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.252742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.253175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.253204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.253715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.254156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.254186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.254588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.255036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.255072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 
00:31:19.506 [2024-07-24 22:30:14.255267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.255700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.255729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.256124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.256547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.256576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.257008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.257389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.257418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.257849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.258267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.258297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.258790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.259299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.259329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.506 [2024-07-24 22:30:14.259692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.260153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.506 [2024-07-24 22:30:14.260170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.506 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.260576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.261006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.261034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 
00:31:19.507 [2024-07-24 22:30:14.261474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.261836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.261864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.262237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.262617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.262646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.263109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.263477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.263513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.263845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.264175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.264189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.264535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.264994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.265023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.265473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.265848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.265876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.266309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.266754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.266783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 
00:31:19.507 [2024-07-24 22:30:14.267212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.267665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.267694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.268208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.268637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.268672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.269055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.269417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.269446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.269887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.270407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.270439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.270868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.271133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.271163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.271617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.271998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.272027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.272559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.272988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.273017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 
00:31:19.507 [2024-07-24 22:30:14.273536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.274024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.274068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.274509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.274946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.274958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.275366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.275864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.275892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.276313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.276650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.276679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.277171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.277603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.277637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.278087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.278535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.278564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.279084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.279505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.279534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 
00:31:19.507 [2024-07-24 22:30:14.279984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.280420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.280450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.280945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.281303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.281332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.281760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.282193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.282223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.282677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.283123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.283153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.283678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.283912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.507 [2024-07-24 22:30:14.283925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.507 qpair failed and we were unable to recover it. 00:31:19.507 [2024-07-24 22:30:14.284332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.284799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.284828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.285343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.285832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.285861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 
00:31:19.508 [2024-07-24 22:30:14.286247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.286702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.286736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.287151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.287627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.287639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.288057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.288489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.288518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.288896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.289312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.289342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.289775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.290162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.290176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.290390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.290742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.290755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.291393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.292832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.292858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 
00:31:19.508 [2024-07-24 22:30:14.293285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.293734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.293748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.294158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.294594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.294623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.295097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.296006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.296033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.296503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.296890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.296904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.297315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.297705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.297719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.298082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.298571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.298584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.298990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.299465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.299479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 
00:31:19.508 [2024-07-24 22:30:14.299688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.300088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.300101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.300444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.300789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.300802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.301212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.301558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.301571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.301975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.302373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.302386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.302808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.303099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.303113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.303520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.303924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.303937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.304284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.304784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.304797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 
00:31:19.508 [2024-07-24 22:30:14.305279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.305686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.305700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.306060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.306467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.306480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.306816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.307151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.307165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.307572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.308023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.308036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.508 qpair failed and we were unable to recover it. 00:31:19.508 [2024-07-24 22:30:14.308523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.309002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.508 [2024-07-24 22:30:14.309015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.509 qpair failed and we were unable to recover it. 00:31:19.509 [2024-07-24 22:30:14.309729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.310311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.310325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.509 qpair failed and we were unable to recover it. 00:31:19.509 [2024-07-24 22:30:14.310733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.311147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.311161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.509 qpair failed and we were unable to recover it. 
00:31:19.509 [2024-07-24 22:30:14.311561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.311772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.311785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.509 qpair failed and we were unable to recover it. 00:31:19.509 [2024-07-24 22:30:14.312243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.312644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.312657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.509 qpair failed and we were unable to recover it. 00:31:19.509 [2024-07-24 22:30:14.313068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.313563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.313576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.509 qpair failed and we were unable to recover it. 00:31:19.509 [2024-07-24 22:30:14.313789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.314290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.314304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.509 qpair failed and we were unable to recover it. 00:31:19.509 [2024-07-24 22:30:14.314731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.314975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.314988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.509 qpair failed and we were unable to recover it. 00:31:19.509 [2024-07-24 22:30:14.315394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.315795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.315808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.509 qpair failed and we were unable to recover it. 00:31:19.509 [2024-07-24 22:30:14.316286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.316615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.509 [2024-07-24 22:30:14.316628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.509 qpair failed and we were unable to recover it. 
00:31:19.509 [2024-07-24 22:30:14.317109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:19.509 [2024-07-24 22:30:14.317449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:31:19.509 [2024-07-24 22:30:14.317462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 
00:31:19.509 qpair failed and we were unable to recover it. 
[... the same connect() failure (errno = 111) from posix.c:1032:posix_sock_create and the same nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock error for tqpair=0x7f73e4000b90 (addr=10.0.0.2, port=4420) repeat for every subsequent connection attempt through 2024-07-24 22:30:14.457375, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:31:19.517 [2024-07-24 22:30:14.457864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.517 [2024-07-24 22:30:14.458376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.517 [2024-07-24 22:30:14.458406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.517 qpair failed and we were unable to recover it. 00:31:19.517 [2024-07-24 22:30:14.458917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.517 [2024-07-24 22:30:14.459432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.517 [2024-07-24 22:30:14.459462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.517 qpair failed and we were unable to recover it. 00:31:19.517 [2024-07-24 22:30:14.459961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.517 [2024-07-24 22:30:14.460203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.517 [2024-07-24 22:30:14.460233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.517 qpair failed and we were unable to recover it. 00:31:19.517 [2024-07-24 22:30:14.460659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.517 [2024-07-24 22:30:14.461163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.517 [2024-07-24 22:30:14.461194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.517 qpair failed and we were unable to recover it. 00:31:19.517 [2024-07-24 22:30:14.461587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.517 [2024-07-24 22:30:14.462027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.517 [2024-07-24 22:30:14.462062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.517 qpair failed and we were unable to recover it. 00:31:19.517 [2024-07-24 22:30:14.462596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.517 [2024-07-24 22:30:14.462965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.517 [2024-07-24 22:30:14.462994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.463443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.463960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.463989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 
00:31:19.518 [2024-07-24 22:30:14.464425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.464916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.464944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.465375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.465796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.465825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.466267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.466711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.466740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.467172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.467649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.467678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.468111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.468544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.468573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.469025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.469464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.469493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.469953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.470394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.470424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 
00:31:19.518 [2024-07-24 22:30:14.470962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.471398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.471428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.471940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.472375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.472406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.472918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.473404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.473434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.473922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.474417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.474447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.474770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.475202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.475232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.475743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.476248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.476261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.476621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.477062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.477092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 
00:31:19.518 [2024-07-24 22:30:14.477582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.477999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.478028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.478561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.479064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.479094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.479643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.480146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.480178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.480618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.481140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.481170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.518 qpair failed and we were unable to recover it. 00:31:19.518 [2024-07-24 22:30:14.481612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.482128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.518 [2024-07-24 22:30:14.482158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.482648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.483147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.483178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.483620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.483924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.483955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 
00:31:19.519 [2024-07-24 22:30:14.484450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.484960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.484989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.485459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.485970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.485999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.486518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.486885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.486914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.487372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.487880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.487908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.488448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.488939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.488968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.489470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.489892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.489921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.490432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.490939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.490968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 
00:31:19.519 [2024-07-24 22:30:14.491510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.491979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.492013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.492500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.493036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.493073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.493507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.494035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.494072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.494564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.495056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.495086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.495616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.496131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.496162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.496696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.497160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.497190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.497679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.498207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.498238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 
00:31:19.519 [2024-07-24 22:30:14.498795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.499256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.499286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.499890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.500460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.500505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.501012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.501531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.501570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.502095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.502658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.502697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.503248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.503765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.503803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.504369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.504906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.504944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.505518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.506060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.506098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 
00:31:19.519 [2024-07-24 22:30:14.506666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.507210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.507248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.507819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.508399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.508437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.508943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.509397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.509436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.509939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.510507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.510547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.519 [2024-07-24 22:30:14.511149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.511614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.519 [2024-07-24 22:30:14.511652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.519 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.512195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.512694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.512732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.513273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.513824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.513862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 
00:31:19.520 [2024-07-24 22:30:14.514396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.514926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.514964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.515543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.516130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.516169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.516742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.517306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.517323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.517749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.518240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.518280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.518832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.519304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.519343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.519903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.520354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.520407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.521017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.521557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.521596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 
00:31:19.520 [2024-07-24 22:30:14.522092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.522507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.522546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.523091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.523663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.523702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.524267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.524751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.524789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.525376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.525922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.525960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.526532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.527089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.527128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.527707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.528226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.528265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.528859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.529431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.529470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 
00:31:19.520 [2024-07-24 22:30:14.530012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.530542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.530581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.531147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.531604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.531642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.532193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.532714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.532752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.533247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.533774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.533815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.534288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.534706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.534745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.535226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.535762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.535799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.536390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.536942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.536979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 
00:31:19.520 [2024-07-24 22:30:14.537474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.537985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.538016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.538547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.539031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.539090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.539625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.540156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.540196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.540686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.541224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.541261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.541679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.542169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.542185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.520 qpair failed and we were unable to recover it. 00:31:19.520 [2024-07-24 22:30:14.542688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.520 [2024-07-24 22:30:14.543190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.543207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.543629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.544070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.544087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 
00:31:19.521 [2024-07-24 22:30:14.544594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.545136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.545171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.545674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.546238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.546277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.546803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.547311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.547350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.547889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.548319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.548337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.548790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.549273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.549312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.549801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.550340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.550378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.550922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.551384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.551423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 
00:31:19.521 [2024-07-24 22:30:14.551995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.552554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.552571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.553071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.553487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.553525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.554076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.554645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.554684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.555223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.555759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.555799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.556350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.556760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.556799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.557271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.557804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.557842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.558328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.558853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.558869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 
00:31:19.521 [2024-07-24 22:30:14.559301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.559781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.559819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.560318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.560844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.560861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.561320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.561879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.561917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.562380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.562901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.562918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.563345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.563714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.563731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.564252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.564800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.564846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.565320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.565741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.565781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 
00:31:19.521 [2024-07-24 22:30:14.566251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.566782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.566798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.567188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.567609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.567647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.568205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.568626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.568664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.569208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.569786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.569824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.570379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.570933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.570971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.521 qpair failed and we were unable to recover it. 00:31:19.521 [2024-07-24 22:30:14.571478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.571999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.521 [2024-07-24 22:30:14.572037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.572625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.573159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.573199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 
00:31:19.522 [2024-07-24 22:30:14.573780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.574186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.574224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.574779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.575317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.575362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.575914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.576423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.576461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.577003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.577488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.577527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.578006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.578504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.578543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.579110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.579647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.579685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.580274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.580746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.580784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 
00:31:19.522 [2024-07-24 22:30:14.581381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.581870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.581908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.582434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.583030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.583090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.583631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.584173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.584212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.584786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.585360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.585399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.585982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.586515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.586561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.587160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.587740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.587779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.588334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.588867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.588905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 
00:31:19.522 [2024-07-24 22:30:14.589471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.590000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.590039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.590501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.591013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.591064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.591670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.592169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.592209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.592765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.593304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.593343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.593909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.594418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.594457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.595062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.595623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.595661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 00:31:19.522 [2024-07-24 22:30:14.596251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.596778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.596816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.522 qpair failed and we were unable to recover it. 
00:31:19.522 [2024-07-24 22:30:14.597413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.522 [2024-07-24 22:30:14.597987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.598033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.598549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.599090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.599129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.599686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.600227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.600267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.600850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.601423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.601462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.602057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.602628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.602667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.603229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.603658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.603696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.604292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.604806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.604844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 
00:31:19.523 [2024-07-24 22:30:14.605424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.605884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.605922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.606483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.607056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.607094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.607637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.608176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.608216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.608797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.609258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.609297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.609870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.610452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.610470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.610967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.611520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.611560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.612108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.612628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.612645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 
00:31:19.523 [2024-07-24 22:30:14.613071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.613489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.613506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.613945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.614419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.523 [2024-07-24 22:30:14.614437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.523 qpair failed and we were unable to recover it. 00:31:19.523 [2024-07-24 22:30:14.614922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.789 [2024-07-24 22:30:14.615345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.789 [2024-07-24 22:30:14.615363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.789 qpair failed and we were unable to recover it. 00:31:19.789 [2024-07-24 22:30:14.615816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.789 [2024-07-24 22:30:14.616255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.789 [2024-07-24 22:30:14.616272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.789 qpair failed and we were unable to recover it. 00:31:19.789 [2024-07-24 22:30:14.616731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.789 [2024-07-24 22:30:14.617272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.789 [2024-07-24 22:30:14.617290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.789 qpair failed and we were unable to recover it. 00:31:19.789 [2024-07-24 22:30:14.617821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.789 [2024-07-24 22:30:14.618352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.789 [2024-07-24 22:30:14.618392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.789 qpair failed and we were unable to recover it. 00:31:19.789 [2024-07-24 22:30:14.618987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.789 [2024-07-24 22:30:14.619577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.789 [2024-07-24 22:30:14.619595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.789 qpair failed and we were unable to recover it. 
00:31:19.789 [2024-07-24 22:30:14.620109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.789 [2024-07-24 22:30:14.620717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.789 [2024-07-24 22:30:14.620755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.621320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.621888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.621926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.622498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.623018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.623071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.623658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.624195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.624256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.624822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.625311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.625349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.625910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.626483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.626522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.627065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.627604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.627641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 
00:31:19.790 [2024-07-24 22:30:14.628207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.628750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.628788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.629265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.629818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.629855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.630447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.630924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.630962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.631472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.632053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.632072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.632619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.633141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.633181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.633741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.634231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.634270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.634834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.635421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.635461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 
00:31:19.790 [2024-07-24 22:30:14.636040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.636596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.636634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.637133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.637678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.637717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.638298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.638868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.638905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.639471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.639993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.640031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.640650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.641143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.641181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.641751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.642348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.642387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.642975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.643483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.643523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 
00:31:19.790 [2024-07-24 22:30:14.644105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.644638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.644677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.645211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.645756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.645793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.646385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.646915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.646952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.647515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.648035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.648084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.648686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.649206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.649246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.649805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.650403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.650442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.790 [2024-07-24 22:30:14.650949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.651484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.651502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 
00:31:19.790 [2024-07-24 22:30:14.652083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.652632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.790 [2024-07-24 22:30:14.652671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.790 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.653251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.653790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.653829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.654344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.654886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.654924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.655510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.655983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.656021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.656592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.657086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.657125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.657681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.658224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.658263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.658812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.659234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.659252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 
00:31:19.791 [2024-07-24 22:30:14.659698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.660221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.660260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.660838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.661390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.661429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.662010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.662572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.662611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.663088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.663635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.663673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.664255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.664829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.664867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.665496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.666070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.666109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.666669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.667167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.667206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 
00:31:19.791 [2024-07-24 22:30:14.667806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.668232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.668271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.668836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.669306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.669346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.669903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.670476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.670515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.671073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.671587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.671626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.672214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.672692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.672731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.673293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.673862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.673900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.674497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.675039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.675089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 
00:31:19.791 [2024-07-24 22:30:14.675577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.676133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.676173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.676654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.677150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.677190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.677782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.678306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.678345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.678902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.679440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.679479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.680031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.680589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.680626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.681221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.681788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.681827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.682387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.682939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.682978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 
00:31:19.791 [2024-07-24 22:30:14.683586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.684155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.791 [2024-07-24 22:30:14.684195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.791 qpair failed and we were unable to recover it. 00:31:19.791 [2024-07-24 22:30:14.684757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.685296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.685335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.685900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.686319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.686358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.686895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.687450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.687488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.688062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.688521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.688559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.689039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.689518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.689556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.690086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.690634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.690672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 
00:31:19.792 [2024-07-24 22:30:14.691230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.691702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.691740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.692295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.692803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.692842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.693425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.693976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.694013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.694586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.695131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.695170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.695734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.696276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.696315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.696895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.697372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.697411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.698020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.698607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.698647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 
00:31:19.792 [2024-07-24 22:30:14.699239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.699792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.699830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.700417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.700931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.700969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.701541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.702066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.702105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.702663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.703241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.703280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.703869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.704401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.704440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.705002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.705575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.705614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.706182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.706630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.706669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 
00:31:19.792 [2024-07-24 22:30:14.707173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.707647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.707685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.708239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.708819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.708858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.709436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.710022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.710070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.710584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.711162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.711201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.711758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.712285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.712323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.712929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.713405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.713444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.714008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.714592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.714610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 
00:31:19.792 [2024-07-24 22:30:14.715124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.715560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.715598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.792 qpair failed and we were unable to recover it. 00:31:19.792 [2024-07-24 22:30:14.716151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.716659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.792 [2024-07-24 22:30:14.716696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.717260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.717814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.717852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.718450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.718943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.718981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.719565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.720139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.720178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.720762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.721320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.721360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.721830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.722417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.722456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 
00:31:19.793 [2024-07-24 22:30:14.722992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.723544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.723584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.724170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.724736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.724775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.725345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.725886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.725924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.726507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.727060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.727099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.727676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.728215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.728268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.728854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.729338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.729377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.729962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.730533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.730573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 
00:31:19.793 [2024-07-24 22:30:14.731160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.731709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.731748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.732356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.732918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.732956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.733557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.734145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.734192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.734777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.735299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.735338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.735946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.736518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.736557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.737100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.737660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.737698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.738265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.738807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.738845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 
00:31:19.793 [2024-07-24 22:30:14.739412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.739953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.739992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.740501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.741068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.741107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.741672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.742244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.742284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.742844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.743383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.743421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.743987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.744418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.744459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.745016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.745599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.745646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.746203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.746688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.746727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 
00:31:19.793 [2024-07-24 22:30:14.747277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.747803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.747842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.793 qpair failed and we were unable to recover it. 00:31:19.793 [2024-07-24 22:30:14.748450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.748875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.793 [2024-07-24 22:30:14.748913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.749461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.750069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.750108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.750634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.751182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.751220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.751774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.752225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.752264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.752772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.753316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.753355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.753956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.754490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.754530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 
00:31:19.794 [2024-07-24 22:30:14.755094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.755591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.755629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.756172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.756707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.756754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.757342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.757910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.757949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.758512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.759023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.759072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.759616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.760171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.760211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.760794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.761212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.761251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.761748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.762211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.762228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 
00:31:19.794 [2024-07-24 22:30:14.762894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.763455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.763497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.764004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.764561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.764601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.765187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.765721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.765759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.766242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.766764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.766803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.767612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.768187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.768235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.768813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.769264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.769282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.769665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.770212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.770252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 
00:31:19.794 [2024-07-24 22:30:14.770809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.771272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.771310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.771824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.772347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.772387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.772872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.773384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.773423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.794 qpair failed and we were unable to recover it. 00:31:19.794 [2024-07-24 22:30:14.773956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.794 [2024-07-24 22:30:14.774430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.774473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.775041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.775582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.775621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.776223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.776730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.776768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.777314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.777865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.777903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 
00:31:19.795 [2024-07-24 22:30:14.778511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.779002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.779040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.779494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.780030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.780080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.780565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.781064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.781104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.781658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.782225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.782264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.782755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.783276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.783315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.783854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.784371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.784410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.784930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.785455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.785494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 
00:31:19.795 [2024-07-24 22:30:14.786010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.786825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.786845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.787388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.787870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.787887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.788333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.788713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.788730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.789248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.789762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.789800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.790308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.790779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.790818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.791399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.791831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.791849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.792364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.792922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.792960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 
00:31:19.795 [2024-07-24 22:30:14.793525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.794068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.794087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.794603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.795124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.795163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.795659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.796114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.796131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.796638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.797121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.797160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.797660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.798124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.798141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.798600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.799170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.799210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.799748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.800306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.800346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 
00:31:19.795 [2024-07-24 22:30:14.800913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.801466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.801505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.801982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.802489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.802529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.795 qpair failed and we were unable to recover it. 00:31:19.795 [2024-07-24 22:30:14.803075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.795 [2024-07-24 22:30:14.803547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.803564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.804086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.804577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.804616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.805128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.805626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.805664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.806244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.806606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.806644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.807176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.807601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.807640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 
00:31:19.796 [2024-07-24 22:30:14.808187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.808655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.808693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.809260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.809794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.809832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.810361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.810860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.810899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.811499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.811962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.812000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.812580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.813009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.813060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.813618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.814085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.814125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.814746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.815268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.815307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 
00:31:19.796 [2024-07-24 22:30:14.815880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.816355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.816373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.816660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.817156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.817174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.817703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.818248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.818287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.818785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.819306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.819354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.819889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.820441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.820480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.821056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.821603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.821641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.822213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.822744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.822783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 
00:31:19.796 [2024-07-24 22:30:14.823371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.823943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.823981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.824563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.825105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.825144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.825730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.826330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.826369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.826956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.827487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.827527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.828125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.828655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.828694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.829275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.829820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.829858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.830452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.831017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.831069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 
00:31:19.796 [2024-07-24 22:30:14.831615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.832135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.832188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.832713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.833255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.796 [2024-07-24 22:30:14.833294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.796 qpair failed and we were unable to recover it. 00:31:19.796 [2024-07-24 22:30:14.833880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.834402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.834440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.834948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.835397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.835436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.835998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.836593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.836633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.837130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.837605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.837644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.838255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.838810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.838848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 
00:31:19.797 [2024-07-24 22:30:14.839406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.839854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.839892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.840472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.840947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.840965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.841483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.842081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.842121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.842691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.843255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.843272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.843791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.844276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.844314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.844884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.845443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.845482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.846040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.846597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.846635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 
00:31:19.797 [2024-07-24 22:30:14.847220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.847693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.847731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.848288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.848857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.848895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.849482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.849942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.849980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.850534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.850967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.850984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.851500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.852067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.852105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.852693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.853243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.853283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.853843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.854405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.854444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 
00:31:19.797 [2024-07-24 22:30:14.855035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.855510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.855548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.856063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.856521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.856539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.857061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.857569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.857587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.858016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.858459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.858477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.858917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.859457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.859496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.860084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.860626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.860664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.861226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.861710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.861748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 
00:31:19.797 [2024-07-24 22:30:14.862318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.862880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.862919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.797 qpair failed and we were unable to recover it. 00:31:19.797 [2024-07-24 22:30:14.863508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.863979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.797 [2024-07-24 22:30:14.864016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.864579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.865161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.865200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.865727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.866148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.866188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.866741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.867306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.867325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.867827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.868324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.868362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.868947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.869434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.869474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 
00:31:19.798 [2024-07-24 22:30:14.869958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.870431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.870470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.871009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.871570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.871609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.872118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.872666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.872705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.873269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.873827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.873865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.874376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.874920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.874959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.875541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.876189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.876229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.876745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.877294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.877333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 
00:31:19.798 [2024-07-24 22:30:14.877771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.878249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.878288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.878860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.879404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.879443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.880002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.880522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.880561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.881041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.881618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.881658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.882251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.882817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.882859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.883361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.883958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.883996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.884607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.885169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.885207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 
00:31:19.798 [2024-07-24 22:30:14.885798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.886364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.886404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.886914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.887444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.887482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.888077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.888619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.888657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.889215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.889794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.889833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.890411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.890884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.890928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.891471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.892039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.892100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.892634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.893118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.893151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 
00:31:19.798 [2024-07-24 22:30:14.893558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.894120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.894160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.798 qpair failed and we were unable to recover it. 00:31:19.798 [2024-07-24 22:30:14.894749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.798 [2024-07-24 22:30:14.895283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.895323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.895880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.896371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.896409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.896970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.897549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.897589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.898178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.898709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.898747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.899245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.899787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.899825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.900412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.900982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.901022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 
00:31:19.799 [2024-07-24 22:30:14.901627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.902192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.902233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.902718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.903129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.903169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.903746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.904313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.904352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.904939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.905471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.905511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.906076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.906647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.906685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.907264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.907727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.907766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.908338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.908900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.908947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 
00:31:19.799 [2024-07-24 22:30:14.909435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.910003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.910041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.910642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.911203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.911222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.911732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.912237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.912259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.912772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.913278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.913317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.913811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.914345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.914363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.914886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.915387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.915426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 00:31:19.799 [2024-07-24 22:30:14.916000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.916447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.799 [2024-07-24 22:30:14.916464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:19.799 qpair failed and we were unable to recover it. 
00:31:20.065 [2024-07-24 22:30:14.916988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.917514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.917531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.918017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.918558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.918575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.919085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.919636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.919674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.920242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.920738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.920775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.921265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.921793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.921831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.922439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.923003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.923063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.923641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.924145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.924184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 
00:31:20.065 [2024-07-24 22:30:14.924673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.925212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.925230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.925759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.926330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.926369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.926946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.927467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.927506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.928112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.928585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.928624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.929178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.929725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.929763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.930320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.930796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.930834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.931330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.931851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.931888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 
00:31:20.065 [2024-07-24 22:30:14.932465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.933034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.933084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.933696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.934271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.934318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.934834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.935261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.935278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.935787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.936226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.936252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.936753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.937346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.937385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.937952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.938480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.938519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.939123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.939583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.939621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 
00:31:20.065 [2024-07-24 22:30:14.940109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.940659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.940697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.941278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.941822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.941860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.942363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.942832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.942871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.065 [2024-07-24 22:30:14.943450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.943973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.065 [2024-07-24 22:30:14.944011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.065 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.944486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.945030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.945091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.945676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.946224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.946263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.946850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.947431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.947470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 
00:31:20.066 [2024-07-24 22:30:14.948033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.948594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.948633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.949217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.949745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.949785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.950354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.950848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.950886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.951445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.952013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.952064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.952612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.953147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.953196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.953738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.954255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.954294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.954852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.955397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.955436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 
00:31:20.066 [2024-07-24 22:30:14.955998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.956556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.956595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.957195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.957673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.957711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.958272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.958820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.958859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.959465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.960066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.960106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.960665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.961201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.961240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.961827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.962356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.962395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.962930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.963411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.963450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 
00:31:20.066 [2024-07-24 22:30:14.964064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.964659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.964697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.965176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.965737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.965776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.966352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.966827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.966864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.967473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.968035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.968086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.968676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.969152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.969192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.969753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.970326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.970366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.970949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.971418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.971457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 
00:31:20.066 [2024-07-24 22:30:14.972068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.972600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.972638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.973154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.973710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.973748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.974324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.974867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.974905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.975415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.975888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.975926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.066 qpair failed and we were unable to recover it. 00:31:20.066 [2024-07-24 22:30:14.976529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.066 [2024-07-24 22:30:14.977101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.067 [2024-07-24 22:30:14.977140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.067 qpair failed and we were unable to recover it. 00:31:20.067 [2024-07-24 22:30:14.977614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.067 [2024-07-24 22:30:14.978143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.067 [2024-07-24 22:30:14.978161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.067 qpair failed and we were unable to recover it. 00:31:20.067 [2024-07-24 22:30:14.978694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.067 [2024-07-24 22:30:14.979251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.067 [2024-07-24 22:30:14.979291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.067 qpair failed and we were unable to recover it. 
00:31:20.067 [2024-07-24 22:30:14.979853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.067 [2024-07-24 22:30:14.980370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.067 [2024-07-24 22:30:14.980410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420
00:31:20.067 qpair failed and we were unable to recover it.
[... the same four-line failure sequence — two posix_sock_create connect() failures with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7f73d4000b90 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." — repeats for every reconnect attempt between 22:30:14.980 and 22:30:15.151; only the timestamps differ ...]
00:31:20.072 [2024-07-24 22:30:15.150814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.072 [2024-07-24 22:30:15.151336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.072 [2024-07-24 22:30:15.151374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420
00:31:20.072 qpair failed and we were unable to recover it.
00:31:20.072 [2024-07-24 22:30:15.151936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.072 [2024-07-24 22:30:15.152458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.072 [2024-07-24 22:30:15.152511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.072 qpair failed and we were unable to recover it. 00:31:20.072 [2024-07-24 22:30:15.153062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.072 [2024-07-24 22:30:15.153606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.072 [2024-07-24 22:30:15.153644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.072 qpair failed and we were unable to recover it. 00:31:20.072 [2024-07-24 22:30:15.154233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.072 [2024-07-24 22:30:15.154810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.072 [2024-07-24 22:30:15.154849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.072 qpair failed and we were unable to recover it. 00:31:20.072 [2024-07-24 22:30:15.155359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.072 [2024-07-24 22:30:15.155830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.155867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.156456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.156983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.157021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.157626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.158095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.158134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.158691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.159273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.159312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 
00:31:20.073 [2024-07-24 22:30:15.159894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.160462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.160501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.161067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.161635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.161673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.162209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.162779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.162817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.163386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.163919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.163958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.164542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.165062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.165102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.165653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.166146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.166184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.166732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.167228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.167266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 
00:31:20.073 [2024-07-24 22:30:15.167839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.168391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.168430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.168962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.169432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.169470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.170064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.170584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.170623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.171243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.171713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.171752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.172314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.172772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.172811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.173384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.173900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.173939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.174406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.174927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.174965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 
00:31:20.073 [2024-07-24 22:30:15.175524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.176061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.176101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.176583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.177125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.177164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.177745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.178306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.178345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.178895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.179439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.179477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.180061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.180597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.180636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.181205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.181756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.181794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.182357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.182914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.182953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 
00:31:20.073 [2024-07-24 22:30:15.183518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.183997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.184036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.184611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.185101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.185141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.185698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.186181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.073 [2024-07-24 22:30:15.186219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.073 qpair failed and we were unable to recover it. 00:31:20.073 [2024-07-24 22:30:15.186764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.074 [2024-07-24 22:30:15.187261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.074 [2024-07-24 22:30:15.187299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.074 qpair failed and we were unable to recover it. 00:31:20.074 [2024-07-24 22:30:15.187853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.074 [2024-07-24 22:30:15.188299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.074 [2024-07-24 22:30:15.188317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.074 qpair failed and we were unable to recover it. 00:31:20.074 [2024-07-24 22:30:15.188818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.074 [2024-07-24 22:30:15.189293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.074 [2024-07-24 22:30:15.189333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.074 qpair failed and we were unable to recover it. 00:31:20.074 [2024-07-24 22:30:15.189908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.074 [2024-07-24 22:30:15.190449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.074 [2024-07-24 22:30:15.190467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.074 qpair failed and we were unable to recover it. 
00:31:20.074 [2024-07-24 22:30:15.190900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.074 [2024-07-24 22:30:15.191363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.074 [2024-07-24 22:30:15.191381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.074 qpair failed and we were unable to recover it. 00:31:20.074 [2024-07-24 22:30:15.191939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.074 [2024-07-24 22:30:15.192469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.074 [2024-07-24 22:30:15.192488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.074 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.192988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.193564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.193604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.194086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.194598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.194637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.195246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.195758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.195796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.196337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.196934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.196972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.197586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.198167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.198207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 
00:31:20.340 [2024-07-24 22:30:15.198707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.199254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.199292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.199777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.200321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.200360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.200961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.201538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.201578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.202161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.202631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.202671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.203229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.203807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.203845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.204405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.204948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.204985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.205581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.206148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.206187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 
00:31:20.340 [2024-07-24 22:30:15.206729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.207272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.207312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.207808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.208282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.208320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.208894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.209412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.209429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.209891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.210392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.210410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.210964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.211479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.211518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.212101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.212674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.340 [2024-07-24 22:30:15.212714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.340 qpair failed and we were unable to recover it. 00:31:20.340 [2024-07-24 22:30:15.213265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.213855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.213893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 
00:31:20.341 [2024-07-24 22:30:15.214474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.214988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.215026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.215588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.216130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.216169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.216758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.217296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.217336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.217814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.218307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.218346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.218958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.219534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.219574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.220156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.220725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.220764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.221332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.221862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.221900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 
00:31:20.341 [2024-07-24 22:30:15.222508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.223082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.223122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.223720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.224264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.224303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.224899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.225494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.225534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.226113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.226686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.226723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.227286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.227870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.227909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.228479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.228953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.228997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.229510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.230035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.230087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 
00:31:20.341 [2024-07-24 22:30:15.230680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.231232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.231272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.231867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.232459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.232498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.233086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.233659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.233699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.234293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.234877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.234917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.235509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.236080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.236120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.236719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.237239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.237278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.237883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.238466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.238505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 
00:31:20.341 [2024-07-24 22:30:15.239110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.239659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.239698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.240274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.240872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.240910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.241497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.242027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.242080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.242632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.243223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.243262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.243826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.244321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.244360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.244933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.245357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.245396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.341 qpair failed and we were unable to recover it. 00:31:20.341 [2024-07-24 22:30:15.245938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.341 [2024-07-24 22:30:15.246545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.246584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 
00:31:20.342 [2024-07-24 22:30:15.247177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.247728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.247767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.248267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.248770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.248809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.249370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.249890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.249928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.250539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.251111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.251150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.251735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.252276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.252316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.252876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.253283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.253325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.253807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.254303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.254321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 
00:31:20.342 [2024-07-24 22:30:15.254837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.255377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.255416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.255977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.256577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.256617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.257212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.257781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.257819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.258402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.258952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.258990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.259573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.260147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.260185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.260752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.261255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.261294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.261888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.262460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.262508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 
00:31:20.342 [2024-07-24 22:30:15.263068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.263587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.263626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.264194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.264745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.264785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.265288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.265758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.265799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.266309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.266853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.266892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.267480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.268022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.268081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.268591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.269113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.269167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.269652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.270204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.270244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 
00:31:20.342 [2024-07-24 22:30:15.270846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.271428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.271467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.272067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.272611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.272649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.273218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.273650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.273696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.274270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.274787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.274826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.275421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.276010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.276074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.276592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.277078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.277112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.342 qpair failed and we were unable to recover it. 00:31:20.342 [2024-07-24 22:30:15.277599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.278158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.342 [2024-07-24 22:30:15.278176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 
00:31:20.343 [2024-07-24 22:30:15.278742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.279249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.279289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.279715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.280269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.280308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.280883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.281388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.281428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.281947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.282426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.282466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.283051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.283604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.283642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.284260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.284880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.284926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.285535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.286139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.286179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 
00:31:20.343 [2024-07-24 22:30:15.286806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.287355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.287372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.287756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.288261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.288279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.288733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.289240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.289259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.289707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.290175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.290196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.290602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.291156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.291196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.291687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.292128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.292146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.292608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.293134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.293154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 
00:31:20.343 [2024-07-24 22:30:15.293540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.293917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.293935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.294407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.294829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.294874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.295358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.295969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.296007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.296486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.297196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.297237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.297967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.298424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.298464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.298983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.299470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.299511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.300035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.300506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.300528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 
00:31:20.343 [2024-07-24 22:30:15.300981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.301466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.301496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.301943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.302395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.302428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.302975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.303434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.303452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.303876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.304376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.304391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.304894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.305256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.305272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.305752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.306224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.306239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 00:31:20.343 [2024-07-24 22:30:15.306687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.307116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.343 [2024-07-24 22:30:15.307131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.343 qpair failed and we were unable to recover it. 
00:31:20.343 [2024-07-24 22:30:15.307554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.307968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.307982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.308385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.308879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.308894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.309391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.309869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.309889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.310429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.310863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.310881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.311385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.311865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.311882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.312266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.312775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.312792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.313308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.313672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.313690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 
00:31:20.344 [2024-07-24 22:30:15.314121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.314550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.314566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.315082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.315576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.315594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.316112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.316542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.316559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.317000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.317435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.317453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.317934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.318597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.318615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.319152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.319605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.319623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.320138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.320597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.320614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 
00:31:20.344 [2024-07-24 22:30:15.321070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.321553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.321570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.322088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.322510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.322527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.323020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.323466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.323483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.323926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.324424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.324443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.324882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.325381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.325399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.325908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.326347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.326365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.326851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.327377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.327395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 
00:31:20.344 [2024-07-24 22:30:15.327834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.328344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.328363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.328867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.329391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.329408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.329848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.330254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.330271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.344 qpair failed and we were unable to recover it. 00:31:20.344 [2024-07-24 22:30:15.330690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.344 [2024-07-24 22:30:15.331200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.331218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.331686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.332217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.332235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.332686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.333163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.333181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.333573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.334052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.334073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 
00:31:20.345 [2024-07-24 22:30:15.334515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.335011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.335028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.335512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.335955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.335972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.336456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.336949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.336966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.337507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.337988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.338005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.338463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.338963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.338981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.339457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.339965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.339983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.340526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.341035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.341093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 
00:31:20.345 [2024-07-24 22:30:15.341605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.342077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.342117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.342690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.343154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.343193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.343819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.344236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.344275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.344848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.345282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.345320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.345830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.346393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.346434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.346985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.347528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.347568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.348148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.348648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.348687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 
00:31:20.345 [2024-07-24 22:30:15.349173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.349651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.349690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.350179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.350650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.350688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.351277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.351772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.351810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.352408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.352993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.353011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.353507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.354071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.354110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.354662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.355251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.355290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.355840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.356361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.356403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 
00:31:20.345 [2024-07-24 22:30:15.356976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.357417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.357457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.358063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.358630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.345 [2024-07-24 22:30:15.358668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.345 qpair failed and we were unable to recover it. 00:31:20.345 [2024-07-24 22:30:15.359231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.359660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.359698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.360250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.360671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.360690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.361195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.361688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.361727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.362198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.362615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.362653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.363217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.363702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.363740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 
00:31:20.346 [2024-07-24 22:30:15.364290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.364763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.364782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.365290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.365811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.365850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.366284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.366834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.366852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.367288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.367726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.367764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.368367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.368917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.368955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.369524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.370037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.370087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.370575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.371069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.371087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 
00:31:20.346 [2024-07-24 22:30:15.371526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.371952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.371970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.372477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.372909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.372927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.373426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.373908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.373926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.374456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.374994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.375013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.375475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.375850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.375868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.376384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.376865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.376883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.377400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.377918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.377938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 
00:31:20.346 [2024-07-24 22:30:15.378484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.378946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.378964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.379471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.379895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.379934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.380434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.381004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.381070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.381633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.382192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.382211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.382658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.383198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.383216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.383731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.384220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.384260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.384746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.385268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.385308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 
00:31:20.346 [2024-07-24 22:30:15.385812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.386314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.386353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.386949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.387412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.387452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.346 qpair failed and we were unable to recover it. 00:31:20.346 [2024-07-24 22:30:15.387961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.388432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.346 [2024-07-24 22:30:15.388474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.388920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.389344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.389383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.389867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.390412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.390451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.391069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.391643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.391681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.392195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.392667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.392706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 
00:31:20.347 [2024-07-24 22:30:15.393244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.393748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.393787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.394353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.394861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.394900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.395492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.396076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.396116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.396639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.397196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.397235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.397791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.398271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.398315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.398771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.399272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.399312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.399822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.400364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.400404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 
00:31:20.347 [2024-07-24 22:30:15.401027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.401555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.401595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.402121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.402619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.402657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.403216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.403740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.403779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.404323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.404760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.404798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.405288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.405758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.405777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.406284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.406790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.406828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.407546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.408014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.408072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 
00:31:20.347 [2024-07-24 22:30:15.408557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.409039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.409093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.409512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.410067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.410107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.410432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.410973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.411012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.411516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.412018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.412072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.412557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.412964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.413003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.413506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.413929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.413969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.414473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.414879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.414897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 
00:31:20.347 [2024-07-24 22:30:15.415333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.415815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.415855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.416345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.416789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.416828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.417306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.417764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.347 [2024-07-24 22:30:15.417802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.347 qpair failed and we were unable to recover it. 00:31:20.347 [2024-07-24 22:30:15.418284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.418795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.418835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.419321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.419734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.419772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.420310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.420719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.420757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.421255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.421668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.421705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 
00:31:20.348 [2024-07-24 22:30:15.422134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.422581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.422620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.423108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.423578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.423616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.424113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.424513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.424532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.424913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.425349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.425389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.425897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.426318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.426358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.426839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.427360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.427407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.427861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.428296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.428318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 
00:31:20.348 [2024-07-24 22:30:15.428808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.429034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.429087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.429586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.429984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.430021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.430516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.431144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.431185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.431656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.432076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.432115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.432545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.433007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.433059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.433617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.434030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.434058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.434517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.434919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.434957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 
00:31:20.348 [2024-07-24 22:30:15.435377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.435949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.435966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.436479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.436860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.436878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.437315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.437688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.437710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.438220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.438702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.438740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.439235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.439656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.439694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.440097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.440557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.440596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.441180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.441628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.441667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 
00:31:20.348 [2024-07-24 22:30:15.442143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.442593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.442632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.443186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.443603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.443642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.444172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.444625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.444663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.348 qpair failed and we were unable to recover it. 00:31:20.348 [2024-07-24 22:30:15.445221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.348 [2024-07-24 22:30:15.445772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.445811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.446402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.446918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.446957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.447358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.447824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.447869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.448393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.448917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.448957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 
00:31:20.349 [2024-07-24 22:30:15.449566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.450165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.450203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.450708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.451220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.451238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.451636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.452152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.452192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.452629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.453108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.453147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.453583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.454082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.454122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.454629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.455106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.455146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.455636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.456105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.456144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 
00:31:20.349 [2024-07-24 22:30:15.456577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.457070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.457110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.457608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.458119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.458167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.458647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.459117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.459156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.459674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.460222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.460261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.460779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.461273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.461313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.461849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.462338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.462355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.462802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.463271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.463289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 
00:31:20.349 [2024-07-24 22:30:15.463756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.464235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.464274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.464902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.465378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.465397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.465913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.466465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.349 [2024-07-24 22:30:15.466504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.349 qpair failed and we were unable to recover it. 00:31:20.349 [2024-07-24 22:30:15.467040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.467683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.467701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.468396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.468782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.468800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.469245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.469626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.469666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.470153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.470574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.470611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 
00:31:20.618 [2024-07-24 22:30:15.471167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.471593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.471631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.472186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.472890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.472931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.473473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.473887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.473926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.474357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.474794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.474833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.475309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.475828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.475867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.476409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.476885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.476924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.477422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.477859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.477897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 
00:31:20.618 [2024-07-24 22:30:15.478519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.479098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.479138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.479637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.480207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.480246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.480755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.481290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.481329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.481871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.482372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.482412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.482981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.483474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.483513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.484011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.484530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.484575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.618 qpair failed and we were unable to recover it. 00:31:20.618 [2024-07-24 22:30:15.485092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.618 [2024-07-24 22:30:15.485551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.485569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 
00:31:20.619 [2024-07-24 22:30:15.486110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.486585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.486623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.487177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.487592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.487629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.488151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.488559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.488597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.489179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.489655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.489693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.490193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.490658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.490698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.491257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.491729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.491767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.492206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.492614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.492653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 
00:31:20.619 [2024-07-24 22:30:15.493167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.493645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.493683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.494186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.494766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.494805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.495364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.495936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.495974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.496499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.497038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.497095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.497530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.497999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.498038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.498517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.499029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.499082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.499573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.500166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.500206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 
00:31:20.619 [2024-07-24 22:30:15.500680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.501112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.501152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.501629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.502202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.502241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.502738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.503266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.503306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.503804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.504496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.504536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.505115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.505601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.505640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.506376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.506848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.506887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.507512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.507951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.507989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 
00:31:20.619 [2024-07-24 22:30:15.508442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.508870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.508908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.619 qpair failed and we were unable to recover it. 00:31:20.619 [2024-07-24 22:30:15.509405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.619 [2024-07-24 22:30:15.509808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.509847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.510391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.510871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.510910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.511477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.511878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.511895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.512399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.512769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.512801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.513206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.513655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.513675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.514156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.514583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.514598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 
00:31:20.620 [2024-07-24 22:30:15.515078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.515515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.515531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.515919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.516365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.516381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.516813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.517255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.517270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.517635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.518085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.518102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.518469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.518887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.518903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.519351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.519754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.519769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.520069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.520504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.520520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 
00:31:20.620 [2024-07-24 22:30:15.520900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.521428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.521443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.521816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.522239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.522256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.522684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.523151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.523167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.523607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.524334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.524350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.524827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.525320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.525335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.525717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.526090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.526105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.526534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.527021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.527036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 
00:31:20.620 [2024-07-24 22:30:15.527528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.527955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.527969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.528434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.528845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.528860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.529359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.529789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.529803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.530298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.530809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.530823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.531276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.531748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.531762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.620 [2024-07-24 22:30:15.532233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.532662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.620 [2024-07-24 22:30:15.532677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.620 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.533185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.533660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.533675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 
00:31:20.621 [2024-07-24 22:30:15.534095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.534541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.534555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.534992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.535442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.535457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.535878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.536359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.536374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.536859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.537281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.537296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.537651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.538132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.538146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.538575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.538956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.538971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.539347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.539767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.539781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 
00:31:20.621 [2024-07-24 22:30:15.540292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.540711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.540725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.541222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.541682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.541696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.542114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.542628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.542642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.543219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.543674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.543687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.544139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.544499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.544513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.544992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.545710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.545726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.546166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.546639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.546654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 
00:31:20.621 [2024-07-24 22:30:15.547096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.547567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.547581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.548014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.548533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.548548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.549110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.549482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.549496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.549995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.550564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.550578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.551008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.551491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.551505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.551928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.552347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.552361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.552814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.553244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.553260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 
00:31:20.621 [2024-07-24 22:30:15.553744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.554104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.554119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.554537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.554943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.554957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.555430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.555932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.555946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.556437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.556863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.556877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.621 qpair failed and we were unable to recover it. 00:31:20.621 [2024-07-24 22:30:15.557105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.621 [2024-07-24 22:30:15.557469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.557483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.557853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.558273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.558288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.558684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.559106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.559121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 
00:31:20.622 [2024-07-24 22:30:15.559599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.560027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.560050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.560469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.560950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.560965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.561395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.561914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.561928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.562353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.562621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.562635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.563030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.563472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.563487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.563985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.564413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.564428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.564831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.565232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.565247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 
00:31:20.622 [2024-07-24 22:30:15.565683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.565953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.565966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.566391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.566791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.566805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.567242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.567736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.567750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.568168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.568658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.568672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.568896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.569269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.569284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.569742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.570192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.570207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.570561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.571038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.571060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 
00:31:20.622 [2024-07-24 22:30:15.571530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.571998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.572013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.572428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.572898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.572912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.573330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.573820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.573835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.574327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.574733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.574748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.575102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.575520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.575534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.575975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.576374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.576390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.576882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.577372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.577387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 
00:31:20.622 [2024-07-24 22:30:15.577822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.578258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.578273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.578747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.579213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.579228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.579639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.580067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.580083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.580551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.581059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.581074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.622 qpair failed and we were unable to recover it. 00:31:20.622 [2024-07-24 22:30:15.581501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.622 [2024-07-24 22:30:15.581914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.581928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.582264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.582697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.582712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.583206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.583615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.583632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 
00:31:20.623 [2024-07-24 22:30:15.584126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.584535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.584549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.584905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.585407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.585422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.585897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.586335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.586349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.586816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.587330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.587344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.587812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.588305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.588320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.588763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.589164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.589179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.589593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.590096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.590111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 
00:31:20.623 [2024-07-24 22:30:15.590565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.591055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.591070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.591558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.592054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.592068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.592417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.592883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.592900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.593119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.593613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.593627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.594115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.594521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.594535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.594956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.595474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.595505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.595950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.596461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.596492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 
00:31:20.623 [2024-07-24 22:30:15.596957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.597386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.597416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.597848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.598295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.598326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.598801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.599189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.599203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.599639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.600018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.600073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.600549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.601084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.601115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.601549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.601977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.602012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 00:31:20.623 [2024-07-24 22:30:15.602573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.603068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.623 [2024-07-24 22:30:15.603100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.623 qpair failed and we were unable to recover it. 
00:31:20.624 [2024-07-24 22:30:15.603553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.603940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.603972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.604384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.604880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.604910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.605373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.605781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.605811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.606196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.606584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.606615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.607067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.607463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.607493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.607940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.608446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.608478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.608929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.609296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.609327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 
00:31:20.624 [2024-07-24 22:30:15.609798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.610239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.610270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.610741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.611194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.611231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.611608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.612213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.612243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.612609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.612984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.613014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.613396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.613922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.613952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.614397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.614784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.614814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.615212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.615576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.615606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 
00:31:20.624 [2024-07-24 22:30:15.616037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.616498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.616528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.617059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.617499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.617529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.617964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.618405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.618435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.618985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.619478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.619509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.619960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.620470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.620501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.620983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.621405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.621435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.621863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.622230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.622244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 
00:31:20.624 [2024-07-24 22:30:15.622735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.623176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.623207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.623597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.624037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.624079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.624524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.624960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.624989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.625434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.626761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.626789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.627244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.627608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.627638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.628115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.628510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.628540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 00:31:20.624 [2024-07-24 22:30:15.629063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.629505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.624 [2024-07-24 22:30:15.629535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.624 qpair failed and we were unable to recover it. 
00:31:20.624 [2024-07-24 22:30:15.629926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.630436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.630466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.625 qpair failed and we were unable to recover it. 00:31:20.625 [2024-07-24 22:30:15.630863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.631224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.631255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.625 qpair failed and we were unable to recover it. 00:31:20.625 [2024-07-24 22:30:15.631654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.632096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.632126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.625 qpair failed and we were unable to recover it. 00:31:20.625 [2024-07-24 22:30:15.632501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.632898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.632928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.625 qpair failed and we were unable to recover it. 00:31:20.625 [2024-07-24 22:30:15.633318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.633705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.633735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.625 qpair failed and we were unable to recover it. 00:31:20.625 [2024-07-24 22:30:15.634199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.634649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.634679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.625 qpair failed and we were unable to recover it. 00:31:20.625 [2024-07-24 22:30:15.635126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.635617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.625 [2024-07-24 22:30:15.635647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.625 qpair failed and we were unable to recover it. 
00:31:20.625 [2024-07-24 22:30:15.636142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.625 [2024-07-24 22:30:15.636481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.625 [2024-07-24 22:30:15.636511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420
00:31:20.625 qpair failed and we were unable to recover it.
[... the same sequence — repeated posix_sock_create connect() failures with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7f73d4000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — recurs for every retry between 22:30:15.636 and 22:30:15.771; the duplicated records are collapsed here, ending with the final attempt below ...]
00:31:20.920 [2024-07-24 22:30:15.771324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.920 [2024-07-24 22:30:15.771828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:20.920 [2024-07-24 22:30:15.771858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420
00:31:20.920 qpair failed and we were unable to recover it.
00:31:20.920 [2024-07-24 22:30:15.772346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.920 [2024-07-24 22:30:15.772856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.920 [2024-07-24 22:30:15.772886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.920 qpair failed and we were unable to recover it. 00:31:20.920 [2024-07-24 22:30:15.773273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.920 [2024-07-24 22:30:15.773761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.920 [2024-07-24 22:30:15.773790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.920 qpair failed and we were unable to recover it. 00:31:20.920 [2024-07-24 22:30:15.774215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.920 [2024-07-24 22:30:15.774585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.920 [2024-07-24 22:30:15.774614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.920 qpair failed and we were unable to recover it. 00:31:20.920 [2024-07-24 22:30:15.775061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.920 [2024-07-24 22:30:15.775488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.920 [2024-07-24 22:30:15.775517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.920 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.775933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.776204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.776235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.776669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.777077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.777109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.777477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.777863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.777893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 
00:31:20.921 [2024-07-24 22:30:15.778320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.778717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.778748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.779202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.779684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.779713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.780096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.780465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.780494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.780875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.781365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.781396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.781889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.782378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.782413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.782828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.783284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.783298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.783662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.784106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.784136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 
00:31:20.921 [2024-07-24 22:30:15.785195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.785632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.785647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.786017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.786414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.786444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.786890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.787309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.787340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.787773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.788195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.788226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.788734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.789170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.789201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.789855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.790276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.790307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.790817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.791376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.791406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 
00:31:20.921 [2024-07-24 22:30:15.792302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.792806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.792821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.793244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.793698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.793729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.794156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.794595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.794623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.795067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.795509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.795538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.795967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.796418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.796448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.796880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.797267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.797281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.797643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.797980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.798009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 
00:31:20.921 [2024-07-24 22:30:15.798512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.798886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.798916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.799349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.799716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.799730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.800067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.800471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.800485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.800937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.801379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.921 [2024-07-24 22:30:15.801411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.921 qpair failed and we were unable to recover it. 00:31:20.921 [2024-07-24 22:30:15.801853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.802294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.802324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.802755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.803101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.803114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.803574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.804062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.804092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 
00:31:20.922 [2024-07-24 22:30:15.804473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.804870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.804883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.805228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.805572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.805585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.805991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.806333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.806347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.806686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.807088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.807102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.807495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.807870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.807899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.808280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.808765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.808794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.809233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.809600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.809629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 
00:31:20.922 [2024-07-24 22:30:15.810065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.810500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.810529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.810989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.811389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.811403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.811820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.812225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.812239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.812573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.813026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.813039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.813392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.813889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.813918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.814294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.814716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.814745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.815203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.815681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.815694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 
00:31:20.922 [2024-07-24 22:30:15.816124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.816556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.816569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.817062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.817502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.817533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.818063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.818551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.818581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.819015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.819469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.819499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.819882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.820296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.820310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.820736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.821150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.821181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.821637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.822019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.822058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 
00:31:20.922 [2024-07-24 22:30:15.822406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.822813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.822843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.823358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.823753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.823782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.824190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.824672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.824686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.922 qpair failed and we were unable to recover it. 00:31:20.922 [2024-07-24 22:30:15.825145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.922 [2024-07-24 22:30:15.825573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.825602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.826019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.826272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.826303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.826744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.827115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.827146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.827603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.828056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.828087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 
00:31:20.923 [2024-07-24 22:30:15.828542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.828909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.828938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.829388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.829870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.829883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.830195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.830622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.830651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.831176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.831682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.831712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.832135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.832465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.832494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.832864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.833390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.833421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.833849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.834280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.834310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 
00:31:20.923 [2024-07-24 22:30:15.834804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.835145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.835175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.835687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.836121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.836151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.836582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.837089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.837120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.837618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.838102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.838132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.838624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.839068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.839098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.839481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.839972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.839986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.840394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.840761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.840790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 
00:31:20.923 [2024-07-24 22:30:15.841277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.841786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.841799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.842102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.842453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.842482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.842945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.843411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.843441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.843689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.844116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.844146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.844578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.845037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.845079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.845517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.846003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.846034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.846566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.847005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.847019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 
00:31:20.923 [2024-07-24 22:30:15.847382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.847836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.923 [2024-07-24 22:30:15.847850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.923 qpair failed and we were unable to recover it. 00:31:20.923 [2024-07-24 22:30:15.848343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.848628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.848657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.849171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.849655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.849685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.850177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.850670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.850699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.851139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.851626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.851655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.852171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.852678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.852707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.853150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.853634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.853664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 
00:31:20.924 [2024-07-24 22:30:15.854089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.854628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.854658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.855145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.855586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.855616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.855893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.856409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.856440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.856935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.857389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.857419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.857908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.858312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.858326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.858721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.859201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.859231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.859659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.859971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.860000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 
00:31:20.924 [2024-07-24 22:30:15.860523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.860957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.860986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.861230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.861755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.861785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.862165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.862681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.862710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.862954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.863479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.863516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.863949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.864441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.864478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.864875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.865393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.865424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.865803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.866310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.866341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 
00:31:20.924 [2024-07-24 22:30:15.866878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.867311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.867342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.924 qpair failed and we were unable to recover it. 00:31:20.924 [2024-07-24 22:30:15.867770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.868253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.924 [2024-07-24 22:30:15.868283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.868771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.869204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.869234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.869671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.870192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.870223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.870693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.871205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.871235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.871726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.872211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.872254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.872767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.873276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.873312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 
00:31:20.925 [2024-07-24 22:30:15.873754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.874194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.874224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.874733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.875173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.875203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.875738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.876231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.876261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.876753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.877238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.877268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.877783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.878221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.878253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.878752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.879124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.879154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.879589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.880099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.880130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 
00:31:20.925 [2024-07-24 22:30:15.880621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.881057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.881088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.881528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.881884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.881913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.882354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.882787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.882821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.883308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.883763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.883793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.884304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.884826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.884855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.885297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.885805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.885835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.886349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.886680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.886709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 
00:31:20.925 [2024-07-24 22:30:15.887177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.887636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.887666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.888091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.888518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.888547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.889038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.889471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.889501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.890022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.890476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.890507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.891010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.891382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.891413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.925 qpair failed and we were unable to recover it. 00:31:20.925 [2024-07-24 22:30:15.891902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.925 [2024-07-24 22:30:15.892406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.892442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.892953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.893386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.893416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 
00:31:20.926 [2024-07-24 22:30:15.893842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.894292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.894323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.894839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.895269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.895300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.895752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.896258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.896288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.896754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.897181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.897211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.897721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.898206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.898236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.898664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.899173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.899203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.899642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.900078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.900110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 
00:31:20.926 [2024-07-24 22:30:15.900387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.900873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.900903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.901416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.901659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.901689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.902154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.902637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.902667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.903181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.903616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.903646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.904140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.904514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.904542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.905006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.905449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.905479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.905954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.906438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.906468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 
00:31:20.926 [2024-07-24 22:30:15.906981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.907485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.907516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.908039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.908525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.908538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.908999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.909536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.909567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.910005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.910522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.910553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.910972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.911403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.911434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.911935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.912405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.912435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.912837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.913325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.913355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 
00:31:20.926 [2024-07-24 22:30:15.913773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.914211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.914241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.914766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.915274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.915305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.915804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.916291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.916322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.916774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.917284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.917315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.917821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.918324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.926 [2024-07-24 22:30:15.918355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.926 qpair failed and we were unable to recover it. 00:31:20.926 [2024-07-24 22:30:15.918740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.919244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.919274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.919763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.919992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.920005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 
00:31:20.927 [2024-07-24 22:30:15.920411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.920944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.920973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.921427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.921946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.921959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.922362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.922782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.922812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.923273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.923709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.923739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.924229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.924579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.924592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.925081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.925453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.925482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.925937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.926307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.926337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 
00:31:20.927 [2024-07-24 22:30:15.926847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.927304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.927334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.927824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.928212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.928242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.928671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.929180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.929210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.929734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.930193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.930224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.930722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.931229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.931260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.931773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.932234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.932265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.932706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.933212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.933226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 
00:31:20.927 [2024-07-24 22:30:15.933700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.934202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.934233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.934745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.935182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.935213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.935647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.936100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.936130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.936641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.937141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.937155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.937610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.938092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.938122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.938493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.938979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.939008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.939487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.939919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.939948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 
00:31:20.927 [2024-07-24 22:30:15.940397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.940886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.940914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.941290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.941769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.941783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.927 qpair failed and we were unable to recover it. 00:31:20.927 [2024-07-24 22:30:15.942263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.942746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.927 [2024-07-24 22:30:15.942775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.943147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.943625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.943655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.944168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.944672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.944702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.945156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.945655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.945684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.946134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.946571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.946600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 
00:31:20.928 [2024-07-24 22:30:15.947140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.947630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.947659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.948166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.948684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.948713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.949217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.949644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.949673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.950187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.950621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.950650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.951148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.951631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.951660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.952153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.952646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.952674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.953189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.953690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.953720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 
00:31:20.928 [2024-07-24 22:30:15.954180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.954632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.954661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.954855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.955284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.955314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.955819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.956246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.956260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.956550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.956977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.957006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.957530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.957957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.957986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.958509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.959023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.959060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.959581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.960143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.960157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 
00:31:20.928 [2024-07-24 22:30:15.960643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.961079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.961109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.961563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.961942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.961972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.962412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.963084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.963114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.963555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.964062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.964077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.964488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.964908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.964922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.965402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.965837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.965865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.966294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.966721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.966734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 
00:31:20.928 [2024-07-24 22:30:15.967189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.967643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.967672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.968213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.968574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.928 [2024-07-24 22:30:15.968587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.928 qpair failed and we were unable to recover it. 00:31:20.928 [2024-07-24 22:30:15.969061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.969506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.969535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.970076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.970582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.970611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.971012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.971412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.971426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.971909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.972393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.972423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.972866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.973280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.973310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 
00:31:20.929 [2024-07-24 22:30:15.973821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.974204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.974234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.974787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.975247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.975277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.975557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.975979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.976008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.976582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.976953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.976991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.977361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.977741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.977769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.978213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.978580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.978609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.979099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.979611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.979640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 
00:31:20.929 [2024-07-24 22:30:15.980156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.980580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.980609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.981054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.981474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.981504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.982010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.982504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.982535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.982910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.983408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.983423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.983881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.984363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.984393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.984899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.985323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.985353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.985774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.986295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.986309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 
00:31:20.929 [2024-07-24 22:30:15.986760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.987006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.987036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.929 [2024-07-24 22:30:15.987504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.987992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.929 [2024-07-24 22:30:15.988005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.929 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:15.988417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.988903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.988932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:15.989252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.989461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.989474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:15.989829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.990318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.990348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:15.990814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.991236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.991266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:15.991766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.992207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.992237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 
00:31:20.930 [2024-07-24 22:30:15.992746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.993208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.993238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:15.993728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.994163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.994177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:15.994566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.995075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.995105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:15.995663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.996101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.996132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:15.996575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.997064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.997095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:15.997521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.997940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.997970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:15.998426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.998914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.998942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 
00:31:20.930 [2024-07-24 22:30:15.999456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.999941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:15.999970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:16.000376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.000886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.000915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:16.001243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.001747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.001776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:16.002232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.002716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.002745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:16.003248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.003754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.003783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:16.004224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.004684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.004713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:16.005229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.005615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.005644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 
00:31:20.930 [2024-07-24 22:30:16.006132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.006455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.006484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:16.006854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.007290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.007321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:16.007749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.008115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.008144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:16.008569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.009075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.009106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:16.009530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.009965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.009994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:16.010512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.010947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.010977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:16.011508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.012022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.012071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 
00:31:20.930 [2024-07-24 22:30:16.012471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.012819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.012849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.930 qpair failed and we were unable to recover it. 00:31:20.930 [2024-07-24 22:30:16.013271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.930 [2024-07-24 22:30:16.013774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.013803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.014250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.014678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.014706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.015054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.015492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.015528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.015970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.016483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.016514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.016745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.017196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.017227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.017724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.018142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.018172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 
00:31:20.931 [2024-07-24 22:30:16.018638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.019100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.019131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.019635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.020095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.020125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.020640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.021154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.021184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.021624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.022056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.022087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.022531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.023034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.023073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.023586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.024028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.024075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.024548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.025055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.025091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 
00:31:20.931 [2024-07-24 22:30:16.025632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.026131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.026162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.026652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.027100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.027130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.027570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.028060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.028090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.028598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.029084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.029114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.029645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.030014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.030053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.030480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.030932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.030961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.031500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.032007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.032036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 
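The three-line pattern above keeps repeating for as long as nothing is accepting connections on 10.0.0.2:4420: errno = 111 is ECONNREFUSED on Linux, which is what connect() returns when the peer actively refuses the TCP connection (typically because no NVMe/TCP target is listening on that port at that moment), and each refused attempt then shows up as the nvme_tcp_qpair_connect_sock error and the "qpair failed" line. As a rough illustration only (a minimal POSIX client sketch, not SPDK's actual posix_sock_create), this is how such a connect() attempt surfaces errno 111 and how a caller might retry before giving up:

/* Minimal sketch: try to connect() to 10.0.0.2:4420 and report ECONNREFUSED
 * (errno 111 on Linux) the way the log above does.  Illustration only, not
 * the SPDK socket layer. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect(const char *ip, uint16_t port)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, ip, &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        int err = errno;                       /* 111 == ECONNREFUSED here */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", err, strerror(err));
        close(fd);
        errno = err;
        return -1;
    }
    return fd;                                 /* connected */
}

int main(void)
{
    for (int attempt = 0; attempt < 5; attempt++) {
        int fd = try_connect("10.0.0.2", 4420);
        if (fd >= 0) {
            close(fd);
            return 0;                          /* target is up */
        }
        if (errno != ECONNREFUSED)
            break;                             /* some other failure; stop */
        sleep(1);                              /* target not listening yet; retry */
    }
    return 1;                                  /* analogous to "unable to recover it" */
}

The retry count, delay, and helper name here are arbitrary; the point is only that errno 111 means the remote side refused the connection, not that the local socket setup failed.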
00:31:20.931 [2024-07-24 22:30:16.032543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.032969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.032998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.033530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.034035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.034073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.034467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3744919 Killed "${NVMF_APP[@]}" "$@" 00:31:20.931 [2024-07-24 22:30:16.034951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.034965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 22:30:16 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:31:20.931 22:30:16 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:31:20.931 22:30:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:20.931 [2024-07-24 22:30:16.035441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 22:30:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:20.931 22:30:16 -- common/autotest_common.sh@10 -- # set +x 00:31:20.931 [2024-07-24 22:30:16.035857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.035879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.036285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.036692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.036705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 00:31:20.931 [2024-07-24 22:30:16.037139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.037554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.037567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.931 qpair failed and we were unable to recover it. 
00:31:20.931 [2024-07-24 22:30:16.037975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.038316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.931 [2024-07-24 22:30:16.038329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.932 qpair failed and we were unable to recover it. 00:31:20.932 [2024-07-24 22:30:16.038815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.039522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.039537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.932 qpair failed and we were unable to recover it. 00:31:20.932 22:30:16 -- nvmf/common.sh@469 -- # nvmfpid=3745784 00:31:20.932 [2024-07-24 22:30:16.040017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 22:30:16 -- nvmf/common.sh@470 -- # waitforlisten 3745784 00:31:20.932 22:30:16 -- common/autotest_common.sh@819 -- # '[' -z 3745784 ']' 00:31:20.932 22:30:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.932 [2024-07-24 22:30:16.040367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.040381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.932 qpair failed and we were unable to recover it. 00:31:20.932 22:30:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:20.932 22:30:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.932 22:30:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:20.932 22:30:16 -- common/autotest_common.sh@10 -- # set +x 00:31:20.932 [2024-07-24 22:30:16.040839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 22:30:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:31:20.932 [2024-07-24 22:30:16.041293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.041310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.932 qpair failed and we were unable to recover it. 00:31:20.932 [2024-07-24 22:30:16.041669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.042094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.042108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.932 qpair failed and we were unable to recover it. 
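At this point the previous target process (pid 3744919) has been killed mid-test, nvmfappstart relaunches nvmf_tgt, and waitforlisten blocks until the new daemon (nvmfpid=3745784) is up and listening on the UNIX domain socket /var/tmp/spdk.sock. A rough sketch of that kind of readiness wait is below, under the assumption that "ready" simply means the RPC socket accepts a connection; the real helper in the test harness may check more than this, so treat the function name and logic as illustrative only:

/* Sketch of a "wait for listen" readiness poll: keep trying to connect to the
 * SPDK RPC socket until it accepts, or time out.  Assumption: readiness is
 * detected purely by a successful connect() to /var/tmp/spdk.sock. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_rpc_socket(const char *path, int timeout_sec)
{
    struct sockaddr_un sa = { .sun_family = AF_UNIX };
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

    for (int i = 0; i < timeout_sec * 10; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            close(fd);
            return 0;            /* target is up and listening */
        }
        close(fd);               /* ENOENT/ECONNREFUSED while it starts up */
        usleep(100 * 1000);      /* poll again in 100 ms */
    }
    return -1;                   /* never came up within the timeout */
}

int main(void)
{
    if (wait_for_rpc_socket("/var/tmp/spdk.sock", 30) != 0) {
        fprintf(stderr, "nvmf_tgt did not start listening in time\n");
        return 1;
    }
    return 0;
}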
00:31:20.932 [2024-07-24 22:30:16.042533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.042876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.042890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.932 qpair failed and we were unable to recover it. 00:31:20.932 [2024-07-24 22:30:16.043258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.043654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.043668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.932 qpair failed and we were unable to recover it. 00:31:20.932 [2024-07-24 22:30:16.044067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.044469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.044482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.932 qpair failed and we were unable to recover it. 00:31:20.932 [2024-07-24 22:30:16.044913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.045314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.932 [2024-07-24 22:30:16.045328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:20.932 qpair failed and we were unable to recover it. 00:31:21.198 [2024-07-24 22:30:16.045782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.198 [2024-07-24 22:30:16.046260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.198 [2024-07-24 22:30:16.046273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.198 qpair failed and we were unable to recover it. 00:31:21.198 [2024-07-24 22:30:16.046615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.198 [2024-07-24 22:30:16.047024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.198 [2024-07-24 22:30:16.047038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.198 qpair failed and we were unable to recover it. 00:31:21.198 [2024-07-24 22:30:16.047521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.198 [2024-07-24 22:30:16.047731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.198 [2024-07-24 22:30:16.047744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.198 qpair failed and we were unable to recover it. 
00:31:21.198 [2024-07-24 22:30:16.048153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.048568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.048581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.048968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.049439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.049455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.049672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.050055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.050068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.050469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.050813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.050827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.051181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.051517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.051530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.051987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.052433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.052446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.052852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.053248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.053261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 
00:31:21.199 [2024-07-24 22:30:16.053713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.054126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.054139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.054536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.055034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.055053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.055528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.055855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.055869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.056337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.056772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.056785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.057178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.057487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.057503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.057962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.058399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.058413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.058825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.059183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.059196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 
00:31:21.199 [2024-07-24 22:30:16.059565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.059973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.059986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.060325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.060712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.060726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.061205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.061682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.061695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.062086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.062431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.062444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.062894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.063301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.063315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.063734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.064211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.064227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.064647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.065002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.065016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 
00:31:21.199 [2024-07-24 22:30:16.065484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.065700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.065718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.066129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.066619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.066632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.067033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.067493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.067508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.199 qpair failed and we were unable to recover it. 00:31:21.199 [2024-07-24 22:30:16.067962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.199 [2024-07-24 22:30:16.068400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.068415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.068755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.069152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.069166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.069592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.069990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.070003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.070428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.070923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.070937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 
00:31:21.200 [2024-07-24 22:30:16.071391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.071788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.071802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.072277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.072907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.072921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.073407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.073819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.073833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.074174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.074649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.074663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.075056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.075475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.075489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.075895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.076392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.076406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.076887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.077281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.077295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 
00:31:21.200 [2024-07-24 22:30:16.077693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.078177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.078191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.078501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.078964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.078977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.079425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.079900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.079914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.080326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.080736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.080750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.081228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.081620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.081634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.082112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.082518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.082532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.082874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.083353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.083367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 
00:31:21.200 [2024-07-24 22:30:16.083723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.084200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.084215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.084691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.085094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.085108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.085506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.085908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.085922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.086325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.086718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.086731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.087189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.087580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.087594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.088070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.088189] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:31:21.200 [2024-07-24 22:30:16.088229] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:21.200 [2024-07-24 22:30:16.088493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.088506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 
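The restarted target announces itself here: SPDK v24.01.1-pre (git sha1 dbef7efac) on DPDK 22.11.4, launched with -m 0xF0 / -c 0xF0. That value is a CPU core mask, and 0xF0 has bits 4 through 7 set, so the nvmf target is pinned to cores 4-7. A tiny sketch of the bit arithmetic behind such a mask (just the mapping from hex mask to core numbers, not SPDK's or DPDK's option parsing):

/* Print which CPU cores a hex core mask selects; 0xF0 -> cores 4 5 6 7. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;               /* the -m / -c value from the log */
    printf("core mask 0x%lX selects cores:", mask);
    for (int core = 0; core < 64; core++) {
        if (mask & (1UL << core))
            printf(" %d", core);
    }
    printf("\n");
    return 0;
}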
00:31:21.200 [2024-07-24 22:30:16.088959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.089413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.089428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.089902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.090355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.090369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.200 [2024-07-24 22:30:16.090799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.091224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.200 [2024-07-24 22:30:16.091238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.200 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.091636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.091875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.091888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.092299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.092486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.092500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.092828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.093236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.093250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.093706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.094158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.094172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 
00:31:21.201 [2024-07-24 22:30:16.094579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.094931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.094944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.095419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.095752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.095765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.096232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.096629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.096642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.096986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.097482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.097496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.097706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.098132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.098147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.098633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.098872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.098885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.099366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.099850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.099863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 
00:31:21.201 [2024-07-24 22:30:16.100277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.100751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.100765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.101187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.101673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.101687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.102167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.102572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.102586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.103063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.103515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.103528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.103984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.104439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.104453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.104871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.105344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.105358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.105840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.106166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.106181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 
00:31:21.201 [2024-07-24 22:30:16.106487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.106951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.106964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.107446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.107771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.107785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.108199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.108613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.108626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.109105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.109526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.109540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.109926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.110346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.110360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.201 [2024-07-24 22:30:16.110681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.111133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.201 [2024-07-24 22:30:16.111147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.201 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.111653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.112051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.112065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 
00:31:21.202 [2024-07-24 22:30:16.112467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.112919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.112932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.113388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.113812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.113825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.114231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.114708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.114721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.202 [2024-07-24 22:30:16.115195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.115672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.115686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.116081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.116488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.116502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.116977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.117371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.117386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.117806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.118260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.118273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 
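The "EAL: No free 2048 kB hugepages reported on node 1" line above comes from DPDK's Environment Abstraction Layer, which SPDK relies on for hugepage-backed memory; it means NUMA node 1 had no free 2 MB hugepages at the moment the application initialized. A minimal sketch (not part of the test itself, just an illustration) of reading the system-wide hugepage counters that this message is ultimately about:

#include <stdio.h>
#include <string.h>

/* Illustration only: print the hugepage counters from /proc/meminfo.
 * The EAL message in the log refers to per-NUMA-node availability,
 * which lives under /sys/devices/system/node/nodeN/hugepages/ and is
 * not walked by this sketch. */
int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];

    if (f == NULL) {
        perror("fopen /proc/meminfo");
        return 1;
    }
    while (fgets(line, sizeof(line), f) != NULL) {
        if (strncmp(line, "HugePages_Free:", 15) == 0 ||
            strncmp(line, "Hugepagesize:", 13) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}

Per-node counts can typically be raised by writing to /sys/devices/system/node/nodeN/hugepages/hugepages-2048kB/nr_hugepages, or globally via the vm.nr_hugepages sysctl.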
00:31:21.202 [2024-07-24 22:30:16.118728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.119075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.119089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.119515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.119932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.119945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.120355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.120811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.120824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.121279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.121631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.121645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.122049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.122476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.122489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.122856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.123261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.123275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.123695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.124083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.124097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 
00:31:21.202 [2024-07-24 22:30:16.124551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.125005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.125018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.125426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.125705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.125719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.126172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.126505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.126519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.126925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.127337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.127351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.127807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.128278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.128292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.128651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.129005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.129018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.129502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.129976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.129989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 
00:31:21.202 [2024-07-24 22:30:16.130397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.130798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.130811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.131212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.131629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.131642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.132062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.132484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.132498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.132856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.133238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.202 [2024-07-24 22:30:16.133252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.202 qpair failed and we were unable to recover it. 00:31:21.202 [2024-07-24 22:30:16.133708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.134163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.134177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.134634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.135121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.135135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.135615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.136024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.136037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 
00:31:21.203 [2024-07-24 22:30:16.136499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.136957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.136970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.137426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.137905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.137918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.138397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.138876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.138890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.139295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.139760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.139773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.140231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.140698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.140712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.140990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.141393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.141407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.141809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.142307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.142321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 
00:31:21.203 [2024-07-24 22:30:16.142531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.142872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.142885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.143279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.143738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.143751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.144213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.144637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.144651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.145127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.145601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.145614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.146024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.146492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.146506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.146959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.147346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.147360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.147850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.148304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.148317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 
00:31:21.203 [2024-07-24 22:30:16.148525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.148978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.148991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.149463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.149881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.149893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.150371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.150844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.150857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.151280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.151737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.151751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.152144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.152568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.152582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.203 qpair failed and we were unable to recover it. 00:31:21.203 [2024-07-24 22:30:16.153057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.203 [2024-07-24 22:30:16.153533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.153547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.153883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.154307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.154321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 
00:31:21.204 [2024-07-24 22:30:16.154799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.155248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.155261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.155716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.156113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.156127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.156584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.157053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.157067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.157472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.157946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.157960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.158347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.158743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.158757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.159217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.159726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.159740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.160149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.160565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.160578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 
00:31:21.204 [2024-07-24 22:30:16.161001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.161475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.161489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.161967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.162315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.162328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.162733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.163185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.163199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.163377] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:21.204 [2024-07-24 22:30:16.163599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.164101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.164116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.164519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.164909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.164923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.165342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.165818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.165833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.166293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.166770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.166784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 
00:31:21.204 [2024-07-24 22:30:16.167262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.167718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.167732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.168152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.168571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.168585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.168975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.169410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.169424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.169852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.170307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.170323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.170804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.171161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.171176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.204 qpair failed and we were unable to recover it. 00:31:21.204 [2024-07-24 22:30:16.171590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.171997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.204 [2024-07-24 22:30:16.172012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.172494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.172830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.172845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 
00:31:21.205 [2024-07-24 22:30:16.173259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.173683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.173697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.174112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.174567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.174581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.175060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.175466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.175480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.175957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.176344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.176358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.176815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.177147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.177161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.177646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.178052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.178066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.178469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.178873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.178887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 
00:31:21.205 [2024-07-24 22:30:16.179364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.179768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.179782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.180235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.180621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.180635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.180977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.181393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.181407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.181891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.182343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.182360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.182816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.183228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.183246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.183646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.184067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.184083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.184572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.184971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.184987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 
00:31:21.205 [2024-07-24 22:30:16.185229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.185685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.185700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.185996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.186451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.186466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.186977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.187391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.187406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.187815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.188213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.188228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.188660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.189125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.189140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.189543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.189961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.189975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.190398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.190751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.190765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 
00:31:21.205 [2024-07-24 22:30:16.191188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.191586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.191600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.192007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.192397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.192411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.192827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.193278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.193292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.193779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.194182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.194196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.205 qpair failed and we were unable to recover it. 00:31:21.205 [2024-07-24 22:30:16.194674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.205 [2024-07-24 22:30:16.195133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.195147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.195641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.196119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.196133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.196618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.197070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.197085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 
00:31:21.206 [2024-07-24 22:30:16.197513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.197943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.197956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.198361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.198767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.198781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.199172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.199600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.199614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.200015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.200417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.200431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.200785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.201279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.201293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.201791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.202271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.202285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.202702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.203130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.203146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.203285] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:21.206 [2024-07-24 22:30:16.203395] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:31:21.206 [2024-07-24 22:30:16.203404] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:21.206 [2024-07-24 22:30:16.203410] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:21.206 [2024-07-24 22:30:16.203504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.203462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:31:21.206 [2024-07-24 22:30:16.203567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:31:21.206 [2024-07-24 22:30:16.203574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:31:21.206 [2024-07-24 22:30:16.203550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:31:21.206 [2024-07-24 22:30:16.203924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.203938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.204267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.204730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.204745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.209057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.209621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.209640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.210128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.210630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.210647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.211068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.211523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.211537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.211967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.212313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.212329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 
00:31:21.206 [2024-07-24 22:30:16.212807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.213265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.213280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.213534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.213938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.213953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.214437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.214840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.214862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.215318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.215801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.215816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.216249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.216657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.216672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.217087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.217567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.206 [2024-07-24 22:30:16.217581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.206 qpair failed and we were unable to recover it. 00:31:21.206 [2024-07-24 22:30:16.218066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.218465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.218480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 
00:31:21.207 [2024-07-24 22:30:16.218911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.219363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.219378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.219848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.220235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.220250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.220684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.221077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.221093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.221547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.221969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.221984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.222319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.222751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.222766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.223192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.223596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.223615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.224007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.224484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.224500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 
00:31:21.207 [2024-07-24 22:30:16.224979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.225330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.225348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.225829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.226309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.226327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.226812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.227239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.227257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.227738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.228210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.228228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.228734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.228892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.228905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.229311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.229701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.229716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.230185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.230678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.230692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 
00:31:21.207 [2024-07-24 22:30:16.231210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.231617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.231632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.232103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.232558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.232573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.232986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.233409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.233425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.233881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.234335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.234350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.234835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.235237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.235252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.235728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.236195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.236210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.236563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.237037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.237059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 
00:31:21.207 [2024-07-24 22:30:16.237456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.237930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.237946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.238353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.238686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.238700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.239178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.239634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.239647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.240052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.240530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.240543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.241018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.241420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.241434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.207 qpair failed and we were unable to recover it. 00:31:21.207 [2024-07-24 22:30:16.241917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.207 [2024-07-24 22:30:16.242391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.242404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.242826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.243182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.243196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 
00:31:21.208 [2024-07-24 22:30:16.243506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.243929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.243943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.244407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.244874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.244888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.245290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.245711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.245725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.246130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.246604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.246618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.247098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.247438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.247452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.247856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.248330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.248345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.248803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.249219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.249234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 
00:31:21.208 [2024-07-24 22:30:16.249651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.250040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.250072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.250572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.250986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.251000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.251478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.251827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.251841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.252297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.252777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.252791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.253284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.253706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.253720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.254198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.254677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.254692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.255028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.255440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.255455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 
00:31:21.208 [2024-07-24 22:30:16.255934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.256323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.256341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.256696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.257082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.257096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.257529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.257882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.257895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.258117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.258504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.258518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.258858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.259277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.259291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.259770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.260186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.260199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.260605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.261057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.261070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 
00:31:21.208 [2024-07-24 22:30:16.261525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.261978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.261991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.208 [2024-07-24 22:30:16.262469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.262945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.208 [2024-07-24 22:30:16.262959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.208 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.263359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.263766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.263779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.264255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.264680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.264693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.265145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.265621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.265635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.266035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.266489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.266502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.266982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.267332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.267345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 
00:31:21.209 [2024-07-24 22:30:16.267748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.268221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.268238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.268693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.269093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.269106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.269455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.269799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.269812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.270223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.270570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.270583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.271068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.271273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.271287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.271672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.272144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.272157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.272564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.272952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.272964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 
00:31:21.209 [2024-07-24 22:30:16.273441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.273862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.273874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.274295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.274680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.274693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.275174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.275518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.275532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.276022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.276497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.276511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.276904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.277357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.277370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.277828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.278303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.278318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.278718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.279192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.279205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 
00:31:21.209 [2024-07-24 22:30:16.279608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.279815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.279827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.280338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.280723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.280736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.209 [2024-07-24 22:30:16.281193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.281665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.209 [2024-07-24 22:30:16.281678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.209 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.282159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.282633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.282646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.283063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.283465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.283479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.283929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.284422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.284438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.284920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.285438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.285452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 
00:31:21.210 [2024-07-24 22:30:16.285885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.286133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.286147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.286420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.286824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.286837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.287315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.287759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.287772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.288179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.288676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.288690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.289145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.289639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.289652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.290058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.290461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.290473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.290948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.291352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.291366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 
00:31:21.210 [2024-07-24 22:30:16.291843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.292254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.292267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.292694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.293081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.293094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.293547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.294053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.294067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.294534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.294994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.295011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.295474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.295868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.295881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.296206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.296595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.296608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.296951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.297339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.297353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 
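Note that the tqpair handle in these messages changes a few times over the stretch (0x7f73d4000b90 earlier, then 0x12f7c00, and 0x7f73e4000b90 from this point on), so the retries are not all bound to a single connection object. Every group has the same shape: socket-level connect() failures, one qpair-level error from nvme_tcp_qpair_connect_sock, and then the summary line 'qpair failed and we were unable to recover it.' That is the usual shape of a bounded retry that recreates its socket on each attempt; after a failed connect() the portable pattern is to close the descriptor and start over with a fresh one rather than reuse it. The sketch below illustrates only that give-up-and-report pattern; the helper names and retry count are invented for the example, and this is not SPDK's implementation.

/* Rough, hypothetical sketch of the bounded retry / give-up pattern visible
 * in the log; not SPDK code. try_connect_once() stands in for creating a
 * fresh socket and calling connect() against 10.0.0.2:4420, which keeps
 * failing with ECONNREFUSED in this run. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct demo_qpair {
    int attempts;
};

/* Assumed helper for the sketch: pretend the target never accepts. */
static bool try_connect_once(struct demo_qpair *qp)
{
    qp->attempts++;
    return false;
}

/* Try a bounded number of times, then report the qpair as unrecoverable,
 * mirroring the "qpair failed and we were unable to recover it." line. */
static void connect_qpair(struct demo_qpair *qp, int max_attempts)
{
    for (int i = 0; i < max_attempts; i++) {
        if (try_connect_once(qp)) {
            printf("qpair %p connected after %d attempts\n", (void *)qp, qp->attempts);
            return;
        }
    }
    printf("qpair %p failed and we were unable to recover it.\n", (void *)qp);
}

int main(void)
{
    /* Successive qpair objects, hence the changing tqpair address in the log. */
    for (int batch = 0; batch < 3; batch++) {
        struct demo_qpair *qp = calloc(1, sizeof(*qp));
        connect_qpair(qp, 2);
        free(qp);
    }
    return 0;
}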
00:31:21.210 [2024-07-24 22:30:16.297754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.298165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.298178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.298520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.298937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.298950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.299314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.299707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.299720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.300127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.300554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.300568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.300964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.301377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.301390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.301803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.302264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.302277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.302744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.303148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.303161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 
00:31:21.210 [2024-07-24 22:30:16.303637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.304117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.304131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.210 qpair failed and we were unable to recover it. 00:31:21.210 [2024-07-24 22:30:16.304482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.304892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.210 [2024-07-24 22:30:16.304905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.211 qpair failed and we were unable to recover it. 00:31:21.211 [2024-07-24 22:30:16.305306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.305742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.305755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.211 qpair failed and we were unable to recover it. 00:31:21.211 [2024-07-24 22:30:16.306158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.306606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.306619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.211 qpair failed and we were unable to recover it. 00:31:21.211 [2024-07-24 22:30:16.307016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.307444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.307458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.211 qpair failed and we were unable to recover it. 00:31:21.211 [2024-07-24 22:30:16.307849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.308236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.308250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.211 qpair failed and we were unable to recover it. 00:31:21.211 [2024-07-24 22:30:16.308735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.309188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.309202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.211 qpair failed and we were unable to recover it. 
00:31:21.211 [2024-07-24 22:30:16.309364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.309865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.309878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.211 qpair failed and we were unable to recover it. 00:31:21.211 [2024-07-24 22:30:16.310265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.310499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.310512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.211 qpair failed and we were unable to recover it. 00:31:21.211 [2024-07-24 22:30:16.310871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.311205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.311219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.211 qpair failed and we were unable to recover it. 00:31:21.211 [2024-07-24 22:30:16.311637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.312126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.312140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.211 qpair failed and we were unable to recover it. 00:31:21.211 [2024-07-24 22:30:16.312620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.313095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.313109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.211 qpair failed and we were unable to recover it. 00:31:21.211 [2024-07-24 22:30:16.313507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.313847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.313860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.211 qpair failed and we were unable to recover it. 00:31:21.211 [2024-07-24 22:30:16.314256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.314710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.211 [2024-07-24 22:30:16.314722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.211 qpair failed and we were unable to recover it. 
(the identical failure sequence repeats from [2024-07-24 22:30:16.315197] through [2024-07-24 22:30:16.440715]: posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.")
00:31:21.484 [2024-07-24 22:30:16.441122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.441510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.441523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.442001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.442472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.442486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.442989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.443439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.443453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.443904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.444383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.444396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.444737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.445189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.445202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.445695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.446148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.446161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.446587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.446937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.446950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 
00:31:21.484 [2024-07-24 22:30:16.447353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.447830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.447843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.448259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.448663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.448677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.449080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.449476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.449489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.449949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.450427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.450441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.450739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.451191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.451204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.451655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.451998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.452010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.452252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.452727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.452740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 
00:31:21.484 [2024-07-24 22:30:16.453214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.453670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.453683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.454082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.454505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.454518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.454917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.455396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.455410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.455887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.456319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.456332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.484 qpair failed and we were unable to recover it. 00:31:21.484 [2024-07-24 22:30:16.456736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.484 [2024-07-24 22:30:16.457214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.457227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.457705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.458187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.458201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.458602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.459053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.459066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 
00:31:21.485 [2024-07-24 22:30:16.459466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.459942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.459954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.460358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.460834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.460847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.461193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.461583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.461596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.462019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.462383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.462396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.462819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.463248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.463261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.463657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.463999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.464012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.464466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.464769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.464781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 
00:31:21.485 [2024-07-24 22:30:16.465185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.465590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.465603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.465999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.466494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.466510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.466986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.467500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.467513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.467901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.468301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.468315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.468734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.469224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.469237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.469628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.470101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.470115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.470522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.470995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.471008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 
00:31:21.485 [2024-07-24 22:30:16.471484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.471909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.471922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.472379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.472776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.472789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.473268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.473669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.473683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.485 qpair failed and we were unable to recover it. 00:31:21.485 [2024-07-24 22:30:16.474162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.485 [2024-07-24 22:30:16.474639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.474653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.475059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.475329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.475345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.475820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.476217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.476231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.476636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.477033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.477051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 
00:31:21.486 [2024-07-24 22:30:16.477415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.477868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.477882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.478339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.478840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.478854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.479207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.479550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.479564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.479978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.480188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.480203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.480612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.480997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.481011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.481490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.481965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.481978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.482330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.482753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.482766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 
00:31:21.486 [2024-07-24 22:30:16.483256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.483594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.483610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.484047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.484507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.484520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.484973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.485431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.485444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.486163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.486622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.486635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.486983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.487320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.487334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.487732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.488204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.488217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.488621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.489013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.489026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 
00:31:21.486 [2024-07-24 22:30:16.489512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.489926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.489939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.490340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.490675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.490688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.486 qpair failed and we were unable to recover it. 00:31:21.486 [2024-07-24 22:30:16.491102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.486 [2024-07-24 22:30:16.491505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.491518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.491973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.492375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.492392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.492735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.493174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.493188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.493643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.494065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.494079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.494486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.494878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.494892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 
00:31:21.487 [2024-07-24 22:30:16.495304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.495773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.495787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.496215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.496631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.496645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.497053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.497527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.497540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.497972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.498408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.498422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.498724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.499184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.499198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.499606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.499805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.499819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.500294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.500707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.500720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 
00:31:21.487 [2024-07-24 22:30:16.501165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.501568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.501582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.502035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.502465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.502478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.502828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.503163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.503177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.503656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.504067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.504081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.504558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.505008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.505021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.505505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.505905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.505918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.506416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.506869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.506882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 
00:31:21.487 [2024-07-24 22:30:16.507339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.507695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.507708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.487 qpair failed and we were unable to recover it. 00:31:21.487 [2024-07-24 22:30:16.508119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.508523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.487 [2024-07-24 22:30:16.508536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.508989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.509441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.509455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.509933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.510346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.510360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.510759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.511068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.511082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.511543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.512022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.512035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.512495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.512698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.512711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 
00:31:21.488 [2024-07-24 22:30:16.513115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.513266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.513279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.513761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.514187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.514201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.514539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.514882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.514895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.515236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.515657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.515670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.516095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.516581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.516593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.517072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.517369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.517382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.517839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.518228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.518242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 
00:31:21.488 [2024-07-24 22:30:16.518725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.519107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.519121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.519544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.519931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.519944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.520338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.520759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.520772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.521180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.521583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.521596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.521992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.522455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.522469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.522953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.523361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.523375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 00:31:21.488 [2024-07-24 22:30:16.523780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.524228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.524242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.488 qpair failed and we were unable to recover it. 
00:31:21.488 [2024-07-24 22:30:16.524638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.525034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.488 [2024-07-24 22:30:16.525050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.525456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.525934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.525947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.526406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.526808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.526821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.527214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.527670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.527683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.528134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.528537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.528550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.528955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.529434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.529447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.529671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.530080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.530093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 
00:31:21.489 [2024-07-24 22:30:16.530565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.530989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.531002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.531482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.531904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.531917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.532321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.532796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.532809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.533149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.533588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.533601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.534064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.534517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.534530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.534885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.535287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.535301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.535728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.536179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.536193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 
00:31:21.489 [2024-07-24 22:30:16.536547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.536907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.536920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.537220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.537615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.537629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.537982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.538368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.538381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.538787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.539256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.539269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.539731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.540148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.540162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.540546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.540948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.540962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.489 qpair failed and we were unable to recover it. 00:31:21.489 [2024-07-24 22:30:16.541420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.541821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.489 [2024-07-24 22:30:16.541834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 
00:31:21.490 [2024-07-24 22:30:16.542296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.542749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.542762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 00:31:21.490 [2024-07-24 22:30:16.543218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.543670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.543683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 00:31:21.490 [2024-07-24 22:30:16.544089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.544477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.544490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 00:31:21.490 [2024-07-24 22:30:16.544944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.545373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.545386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 00:31:21.490 [2024-07-24 22:30:16.545732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.546445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.546460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 00:31:21.490 [2024-07-24 22:30:16.546855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.547306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.547320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 00:31:21.490 [2024-07-24 22:30:16.547779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.548170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.548184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 
00:31:21.490 [2024-07-24 22:30:16.548590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.549004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.549017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 00:31:21.490 [2024-07-24 22:30:16.549319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.549796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.549809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 00:31:21.490 [2024-07-24 22:30:16.550212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.550669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.550682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 00:31:21.490 [2024-07-24 22:30:16.551113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.551510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.551522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 00:31:21.490 [2024-07-24 22:30:16.551934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.552357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.552371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 00:31:21.490 [2024-07-24 22:30:16.552800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.553183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.553196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 00:31:21.490 [2024-07-24 22:30:16.553597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.553848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.490 [2024-07-24 22:30:16.553861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.490 qpair failed and we were unable to recover it. 
00:31:21.490 [2024-07-24 22:30:16.554153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.554632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.554645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.555054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.555482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.555496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.555887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.556340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.556353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.556800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.557191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.557205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.557549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.557935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.557948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.558351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.558811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.558825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.559170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.559582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.559596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 
00:31:21.491 [2024-07-24 22:30:16.560078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.560580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.560597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.560949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.561195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.561210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.561685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.562145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.562159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.562555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.562959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.562972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.563377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.563537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.563550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.563938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.564340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.564354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.564823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.565227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.565241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 
00:31:21.491 [2024-07-24 22:30:16.565644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.565985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.565998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.566450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.566855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.566868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.567209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.567680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.567694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.568031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.568503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.568517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.568971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.569239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.569252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.569665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.570127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.570141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.491 qpair failed and we were unable to recover it. 00:31:21.491 [2024-07-24 22:30:16.570594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.491 [2024-07-24 22:30:16.571068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.571082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 
00:31:21.492 [2024-07-24 22:30:16.571538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.571932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.571945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.572335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.572785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.572798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.573008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.573417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.573431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.573939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.574343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.574356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.574695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.575098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.575112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.575504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.575980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.575993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.576345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.576819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.576835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 
00:31:21.492 [2024-07-24 22:30:16.577238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.577651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.577664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.578143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.578541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.578555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.578963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.579292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.579306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.579770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.580223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.580236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.580678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.581148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.581161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.581556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.581967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.581981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.582434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.582820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.582833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 
00:31:21.492 [2024-07-24 22:30:16.583262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.583740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.583754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.584206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.584557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.584570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.585230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.585626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.585640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.585990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.586464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.586479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.586897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.587365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.587378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.492 qpair failed and we were unable to recover it. 00:31:21.492 [2024-07-24 22:30:16.587785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.588195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.492 [2024-07-24 22:30:16.588208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.588623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.589055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.589068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 
00:31:21.493 [2024-07-24 22:30:16.589477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.589806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.589819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.590292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.590694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.590707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.591131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.591526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.591540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.591969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.592371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.592385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.592839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.593255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.593269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.593722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.593957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.593970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.594368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.594868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.594882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 
00:31:21.493 [2024-07-24 22:30:16.595229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.595580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.595593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.596025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.596369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.596382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.596855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.597275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.597289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.597766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.598185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.598199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.598536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.598858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.598871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.599214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.599691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.599704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.600052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.600470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.600483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 
00:31:21.493 [2024-07-24 22:30:16.600979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.601337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.601351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.601738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.602189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.602203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12f7c00 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.602697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.603185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.603200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.493 [2024-07-24 22:30:16.603657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.603984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.493 [2024-07-24 22:30:16.603994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:31:21.493 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.604403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.604786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.604795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.605269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.605664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.605673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.606126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.606473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.606483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 
00:31:21.760 [2024-07-24 22:30:16.606833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.607221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.607231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.607617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.608109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.608119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.608581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.609024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.609033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.609509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.609854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.609864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.610252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.610657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.610666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73dc000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.611047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.611520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.611535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.611970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.612312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.612326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 
00:31:21.760 [2024-07-24 22:30:16.612806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.613260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.613273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.613635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.614112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.614125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.614600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.615054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.615068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.615545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.616021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.616034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.616470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.616883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.616896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.617388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.617725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.617738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.618144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.618559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.618572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 
00:31:21.760 [2024-07-24 22:30:16.619051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.619453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.619466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.619817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.620055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.620068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.620521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.620923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.620936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.621343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.621806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.760 [2024-07-24 22:30:16.621819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.760 qpair failed and we were unable to recover it. 00:31:21.760 [2024-07-24 22:30:16.622224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.622644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.622657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.761 qpair failed and we were unable to recover it. 00:31:21.761 [2024-07-24 22:30:16.623110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.623587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.623600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.761 qpair failed and we were unable to recover it. 00:31:21.761 [2024-07-24 22:30:16.624075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.624378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.624391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.761 qpair failed and we were unable to recover it. 
00:31:21.761 [2024-07-24 22:30:16.624745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.625204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.625218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.761 qpair failed and we were unable to recover it. 00:31:21.761 [2024-07-24 22:30:16.625606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.626091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.626104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.761 qpair failed and we were unable to recover it. 00:31:21.761 [2024-07-24 22:30:16.626582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.627056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.627069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.761 qpair failed and we were unable to recover it. 00:31:21.761 [2024-07-24 22:30:16.627470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.627945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.627958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.761 qpair failed and we were unable to recover it. 00:31:21.761 [2024-07-24 22:30:16.628356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.628836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.628849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.761 qpair failed and we were unable to recover it. 00:31:21.761 [2024-07-24 22:30:16.629267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.629746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.629760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.761 qpair failed and we were unable to recover it. 00:31:21.761 [2024-07-24 22:30:16.630128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.630598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.761 [2024-07-24 22:30:16.630611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.761 qpair failed and we were unable to recover it. 
00:31:21.761 [2024-07-24 22:30:16.631002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.761 [2024-07-24 22:30:16.631357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:21.761 [2024-07-24 22:30:16.631370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420
00:31:21.761 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every connection attempt from 2024-07-24 22:30:16.631 through 22:30:16.758 (elapsed 00:31:21.761 to 00:31:21.767); only the timestamps change between repetitions ...]
00:31:21.767 [2024-07-24 22:30:16.758912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.759310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.759324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 00:31:21.767 [2024-07-24 22:30:16.759677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.760087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.760100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 00:31:21.767 [2024-07-24 22:30:16.760533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.760919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.760932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 00:31:21.767 [2024-07-24 22:30:16.761329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.761675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.761688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 00:31:21.767 [2024-07-24 22:30:16.762114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.762566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.762579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 00:31:21.767 [2024-07-24 22:30:16.762986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.763391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.763406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 00:31:21.767 [2024-07-24 22:30:16.763744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.764139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.764152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 
00:31:21.767 [2024-07-24 22:30:16.764569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.764968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.764981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 00:31:21.767 [2024-07-24 22:30:16.765410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.765750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.765763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 00:31:21.767 [2024-07-24 22:30:16.766231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.766702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.766715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 00:31:21.767 [2024-07-24 22:30:16.766945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.767343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.767357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 00:31:21.767 [2024-07-24 22:30:16.767783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.768117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.768131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 00:31:21.767 [2024-07-24 22:30:16.768600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.768952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.768966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 00:31:21.767 [2024-07-24 22:30:16.769389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.769802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.769815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 
00:31:21.767 [2024-07-24 22:30:16.770108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.770594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.767 [2024-07-24 22:30:16.770607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.767 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.771031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.771459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.771473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.771863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.772265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.772278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.772700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.773052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.773066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.773419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.773816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.773829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.774287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.774704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.774718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.775143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.775553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.775568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 
00:31:21.768 [2024-07-24 22:30:16.776049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.776511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.776524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.776690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.777057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.777071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.777482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.777876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.777891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.778351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.778690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.778705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.779121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.779526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.779539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.779933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.780337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.780350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.780681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.781163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.781176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 
00:31:21.768 [2024-07-24 22:30:16.781553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.781965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.781982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.782331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.782737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.782750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.783147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.783608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.783623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.783972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.784316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.784331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.784671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.785014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.785028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.785438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.785832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.785846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.786251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.786652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.786670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 
00:31:21.768 [2024-07-24 22:30:16.787305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.787644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.787657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.788058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.788437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.788450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.788838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.789170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.789184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.789602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.790003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.790016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.790435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.790785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.790799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.791424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.791746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.791761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.792168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.792576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.792589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 
00:31:21.768 [2024-07-24 22:30:16.793052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.793454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.793468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.768 qpair failed and we were unable to recover it. 00:31:21.768 [2024-07-24 22:30:16.793813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.768 [2024-07-24 22:30:16.794220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.794240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.794890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.795236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.795252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.795645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.796059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.796073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.796414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.796817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.796831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.797228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.797566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.797580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.797921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.798099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.798112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 
00:31:21.769 [2024-07-24 22:30:16.798458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.798803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.798817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.799165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.799514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.799528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.799944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.800299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.800313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.800647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.801280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.801306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.801748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.802083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.802098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.802447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.803058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.803077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.803324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.803718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.803733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 
00:31:21.769 [2024-07-24 22:30:16.804132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.804462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.804475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.804869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.805209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.805224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.805565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.805920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.805934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.806285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.806994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.807009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.807414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.807816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.807830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.808133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.808526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.808539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.808692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.809038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.809057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 
00:31:21.769 [2024-07-24 22:30:16.809459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.809791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.809804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.810282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.810627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.810641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.810997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.811658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.811671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.812069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.812475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.812489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.812823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.813174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.813187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.813593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.814007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.814021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.769 qpair failed and we were unable to recover it. 00:31:21.769 [2024-07-24 22:30:16.814371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.814723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.769 [2024-07-24 22:30:16.814736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 
00:31:21.770 [2024-07-24 22:30:16.815069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.815695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.815708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.816107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.816520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.816533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.816883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.817310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.817324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.817660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.818063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.818077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.818433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.818774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.818787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.819136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.819466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.819479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.819806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.820454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.820469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 
00:31:21.770 [2024-07-24 22:30:16.820873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.821329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.821343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.821761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.822096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.822109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.822432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.822832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.822846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.823254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.823600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.823613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.823963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.824399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.824414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.824861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.825198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.825213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.825543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.825891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.825905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 
00:31:21.770 [2024-07-24 22:30:16.826258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.826597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.826611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.826939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.827380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.827394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.827797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.828198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.828213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.828563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.828881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.828896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.829302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.829647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.829661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.830229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.830631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.830645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.831058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.831388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.831401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 
00:31:21.770 [2024-07-24 22:30:16.831748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.832070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.832084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.832239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.832634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.832648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.832979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.833399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.833413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.833987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.834383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.770 [2024-07-24 22:30:16.834397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.770 qpair failed and we were unable to recover it. 00:31:21.770 [2024-07-24 22:30:16.834733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.835082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.835097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.835311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.835667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.835682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.836079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.836593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.836607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 
00:31:21.771 [2024-07-24 22:30:16.836974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.837312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.837326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.837729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.838113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.838127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.838482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.838828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.838842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.839254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.839706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.839720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.840126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.840525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.840540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.840875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.841224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.841239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.841578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.841975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.841989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 
00:31:21.771 [2024-07-24 22:30:16.842387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.842795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.842808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.843202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.843603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.843616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.843953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.844287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.844301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.844702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.845097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.845112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.845511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.845844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.845858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.846415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.846815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.846829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.847255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.847596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.847609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 
00:31:21.771 [2024-07-24 22:30:16.847930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.848323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.848337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.848889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.849283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.849298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.849645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.850012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.850026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.850379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.850707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.850721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.851067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.851396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.851410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.851819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.852299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.852314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.852717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.853052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.853066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 
00:31:21.771 [2024-07-24 22:30:16.853460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.853928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.853942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.854287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.854743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.854756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.855158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.855500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.855515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.855860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.856268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.856282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.771 qpair failed and we were unable to recover it. 00:31:21.771 [2024-07-24 22:30:16.856615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.771 [2024-07-24 22:30:16.856960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.856973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.857310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.857539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.857554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.857897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.858305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.858320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 
00:31:21.772 [2024-07-24 22:30:16.858645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.859032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.859049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.859397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.859730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.859744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.860154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.860485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.860498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.860898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.861362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.861376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.861718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.862056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.862070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.862715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.863168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.863183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.863512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.863852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.863866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 
00:31:21.772 [2024-07-24 22:30:16.864348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.864703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.864717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.864951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.865354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.865368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.865717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.866177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.866192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.866545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.866877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.866891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.867284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.867703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.867717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.868075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.868424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.868438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.869031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.869321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.869336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 
00:31:21.772 [2024-07-24 22:30:16.869561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.869911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.869925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.870322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.870728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.870743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.871092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.871446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.871459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.871878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.872304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.872318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.872736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.873149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.873163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.873509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.873911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.873925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.874379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.874772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.874786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 
00:31:21.772 [2024-07-24 22:30:16.875188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.875532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.875545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.875874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.876271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.876284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.876619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.877080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.877094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.772 qpair failed and we were unable to recover it. 00:31:21.772 [2024-07-24 22:30:16.877574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.772 [2024-07-24 22:30:16.877905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.877918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.773 qpair failed and we were unable to recover it. 00:31:21.773 [2024-07-24 22:30:16.878323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.878723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.878737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.773 qpair failed and we were unable to recover it. 00:31:21.773 [2024-07-24 22:30:16.878976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.879386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.879399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.773 qpair failed and we were unable to recover it. 00:31:21.773 [2024-07-24 22:30:16.879839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.880276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.880290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.773 qpair failed and we were unable to recover it. 
00:31:21.773 [2024-07-24 22:30:16.880816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.881317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.881330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.773 qpair failed and we were unable to recover it. 00:31:21.773 [2024-07-24 22:30:16.881730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.882205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.882219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.773 qpair failed and we were unable to recover it. 00:31:21.773 [2024-07-24 22:30:16.882627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.883038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.883057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.773 qpair failed and we were unable to recover it. 00:31:21.773 [2024-07-24 22:30:16.883447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.883807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.883820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.773 qpair failed and we were unable to recover it. 00:31:21.773 [2024-07-24 22:30:16.884229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.884582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.884596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.773 qpair failed and we were unable to recover it. 00:31:21.773 [2024-07-24 22:30:16.884885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.885272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:21.773 [2024-07-24 22:30:16.885286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:21.773 qpair failed and we were unable to recover it. 00:31:21.773 [2024-07-24 22:30:16.885630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.886134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.886148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.035 qpair failed and we were unable to recover it. 
00:31:22.035 [2024-07-24 22:30:16.886496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.886859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.886873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.035 qpair failed and we were unable to recover it. 00:31:22.035 [2024-07-24 22:30:16.887216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.887555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.887568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.035 qpair failed and we were unable to recover it. 00:31:22.035 [2024-07-24 22:30:16.888004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.888420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.888434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.035 qpair failed and we were unable to recover it. 00:31:22.035 [2024-07-24 22:30:16.888838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.889198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.889211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.035 qpair failed and we were unable to recover it. 00:31:22.035 [2024-07-24 22:30:16.889666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.890080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.890094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.035 qpair failed and we were unable to recover it. 00:31:22.035 [2024-07-24 22:30:16.890495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.890889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.890903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.035 qpair failed and we were unable to recover it. 00:31:22.035 [2024-07-24 22:30:16.891398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.891585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.891599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.035 qpair failed and we were unable to recover it. 
00:31:22.035 [2024-07-24 22:30:16.892116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.892523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.892536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.035 qpair failed and we were unable to recover it. 00:31:22.035 [2024-07-24 22:30:16.892899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.893255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 [2024-07-24 22:30:16.893269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.035 qpair failed and we were unable to recover it. 00:31:22.035 [2024-07-24 22:30:16.893622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.035 22:30:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:22.035 [2024-07-24 22:30:16.894032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.894060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 22:30:16 -- common/autotest_common.sh@852 -- # return 0 00:31:22.036 22:30:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:22.036 22:30:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:22.036 [2024-07-24 22:30:16.894412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 22:30:16 -- common/autotest_common.sh@10 -- # set +x 00:31:22.036 [2024-07-24 22:30:16.894763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.894777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.895194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.895552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.895566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.896018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.896430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.896444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 
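The shell-trace fragments mixed into the records above (autotest_common.sh@848 checking (( i == 0 )), @852 returning 0, then nvmf/common.sh@471 timing_exit start_nvmf_tgt) appear to be the tail of the harness's wait for the nvmf target application to come up. The loop below is only a generic sketch of that poll-until-ready pattern, written against the endpoint seen in the log; the function name, retry count and sleep interval are assumptions, not SPDK's actual helper code.

    # Illustrative only: retry until the TCP listener answers, then report success.
    wait_for_tcp_listener() {
        local addr=$1 port=$2 i
        for ((i = 0; i < 30; i++)); do
            if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
                return 0        # listener reachable, stop retrying
            fi
            sleep 1             # still refused (errno 111), try again
        done
        return 1                # gave up; the caller treats this as a setup failure
    }

    wait_for_tcp_listener 10.0.0.2 4420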
00:31:22.036 [2024-07-24 22:30:16.896630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.896978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.896996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.897390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.897798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.897812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.898204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.898578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.898592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.898994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.899416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.899430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.899835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.900059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.900073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.900504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.900844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.900858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.901215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.901571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.901585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 
00:31:22.036 [2024-07-24 22:30:16.901925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.902281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.902295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.902626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.903111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.903126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.903537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.904029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.904049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.904568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.905046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.905061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.905506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.905742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.905756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.906249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.906677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.906691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.907104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.907489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.907502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 
00:31:22.036 [2024-07-24 22:30:16.907913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.908439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.908453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.908815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.909247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.909262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.909671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.910224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.910238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.910598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.911092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.911106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.911539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.912032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.912051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.912436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.913109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.913124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.913562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.914097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.914112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 
00:31:22.036 [2024-07-24 22:30:16.914537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.914894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.914907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.915368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.915771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.915785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.916143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.916505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.916519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.916881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.917251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.917265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.917629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.918063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.918077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.918438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.918791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.918805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.919270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.919619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.919632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 
00:31:22.036 [2024-07-24 22:30:16.920168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.920595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.920609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.921054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.921407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.921421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.921828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.922304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.922318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.922685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.923149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.923163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.923572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.924015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.924029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.036 [2024-07-24 22:30:16.924532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.924901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.036 [2024-07-24 22:30:16.924915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.036 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.925339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.925698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.925712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 
00:31:22.037 [2024-07-24 22:30:16.926176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.926531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.926546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.926983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.927465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.927480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.927842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.928345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.928361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 22:30:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.037 [2024-07-24 22:30:16.929046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 22:30:16 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:22.037 [2024-07-24 22:30:16.929518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.929534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 22:30:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.037 22:30:16 -- common/autotest_common.sh@10 -- # set +x 00:31:22.037 [2024-07-24 22:30:16.929896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.930304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.930318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73e4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.930756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.931255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.931273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 
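The nvmf/common.sh@472 trace line above installs trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT, so shared-memory diagnostics are collected and the target is torn down even if the test is interrupted; the || : lets the diagnostic step fail without cutting off the rest of the trap. Below is a minimal, generic version of that pattern; the function bodies are placeholders, not the harness's real cleanup code.

    # Illustrative only: always run cleanup, on normal exit and on interruption.
    collect_diagnostics() { echo "dumping shared-memory state"; }
    teardown_target()     { echo "stopping the nvmf target"; }
    trap 'collect_diagnostics || :; teardown_target' SIGINT SIGTERM EXIT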
00:31:22.037 [2024-07-24 22:30:16.931721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.932211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.932227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.932645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.933116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.933132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.933498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.933852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.933866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.934387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.934793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.934807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.935210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.935642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.935655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.936122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.936490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.936504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.936849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.937322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.937337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 
00:31:22.037 [2024-07-24 22:30:16.937692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.938177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.938192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.938604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.939052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.939067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.939481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.939942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.939956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.940353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.940764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.940779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.941181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.941575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.941591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.942023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.942452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.942469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.942874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.943346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.943364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 
00:31:22.037 [2024-07-24 22:30:16.943785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.944266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.944285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.944779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.945190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.945205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.945622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.946049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.946063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.946506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 Malloc0 00:31:22.037 [2024-07-24 22:30:16.946867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.946888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 22:30:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.037 22:30:16 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:22.037 [2024-07-24 22:30:16.947387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 22:30:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.037 22:30:16 -- common/autotest_common.sh@10 -- # set +x 00:31:22.037 [2024-07-24 22:30:16.947872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.947902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.948433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.948889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.948905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.949310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.949718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.949732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 
00:31:22.037 [2024-07-24 22:30:16.950189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.950563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.950577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.951023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.951445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.951459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.951945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.952440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.952454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.952812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.953273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.953287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.953689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.954158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.954172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.954583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.954583] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.037 [2024-07-24 22:30:16.955131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.955145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 00:31:22.037 [2024-07-24 22:30:16.955519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.956034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.037 [2024-07-24 22:30:16.956051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.037 qpair failed and we were unable to recover it. 
00:31:22.038 [2024-07-24 22:30:16.956450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.956909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.956922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.957404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.957888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.957902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.958461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.958868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.958881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.959287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.959758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.959771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.960224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.960588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.960601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.960951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.961442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.961456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.961876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.962298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.962311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 
00:31:22.038 [2024-07-24 22:30:16.962727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 22:30:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.038 22:30:16 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:22.038 [2024-07-24 22:30:16.963183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.963210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 22:30:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.038 22:30:16 -- common/autotest_common.sh@10 -- # set +x 00:31:22.038 [2024-07-24 22:30:16.963601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.964091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.964107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.964594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.965109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.965124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.965550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.965951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.965964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.966404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.966753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.966766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.967244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.967608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.967621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 
00:31:22.038 [2024-07-24 22:30:16.968048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.968542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.968556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.968993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.969404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.969418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.969826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.970347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.970361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.970779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 22:30:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.038 22:30:16 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:22.038 22:30:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.038 [2024-07-24 22:30:16.971301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 22:30:16 -- common/autotest_common.sh@10 -- # set +x 00:31:22.038 [2024-07-24 22:30:16.971326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.971779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.972163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.972178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.972613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.973050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.973064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.973499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.973916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.973929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 
00:31:22.038 [2024-07-24 22:30:16.974324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.974765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.974779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.975125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.975481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.975494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.975894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.976364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.976378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.976739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.977265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.977279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.977763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.978172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.978185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.978586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 22:30:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.038 [2024-07-24 22:30:16.979035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.979069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 
00:31:22.038 22:30:16 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.038 22:30:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.038 22:30:16 -- common/autotest_common.sh@10 -- # set +x 00:31:22.038 [2024-07-24 22:30:16.979501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.979968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.979984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.980384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.980737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.980752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.981250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.981692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.981706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.982192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.982609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:22.038 [2024-07-24 22:30:16.982623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f73d4000b90 with addr=10.0.0.2, port=4420 00:31:22.038 qpair failed and we were unable to recover it. 00:31:22.038 [2024-07-24 22:30:16.982849] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.038 [2024-07-24 22:30:16.985264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.038 [2024-07-24 22:30:16.985461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.038 [2024-07-24 22:30:16.985487] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.038 [2024-07-24 22:30:16.985498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.038 [2024-07-24 22:30:16.985507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.038 [2024-07-24 22:30:16.985534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.038 qpair failed and we were unable to recover it. 
00:31:22.038 22:30:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.038 22:30:16 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:22.038 22:30:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:22.038 22:30:16 -- common/autotest_common.sh@10 -- # set +x 00:31:22.038 22:30:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:22.038 22:30:16 -- host/target_disconnect.sh@58 -- # wait 3745081 00:31:22.038 [2024-07-24 22:30:16.995324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.038 [2024-07-24 22:30:16.995517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.038 [2024-07-24 22:30:16.995544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.038 [2024-07-24 22:30:16.995554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:16.995563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:16.995588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 00:31:22.039 [2024-07-24 22:30:17.005135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.005459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.005478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.005485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.005490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.005507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 00:31:22.039 [2024-07-24 22:30:17.015136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.015290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.015307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.015313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.015319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.015335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 
00:31:22.039 [2024-07-24 22:30:17.025196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.025340] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.025357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.025364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.025370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.025386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 00:31:22.039 [2024-07-24 22:30:17.035207] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.035341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.035358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.035365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.035370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.035387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 00:31:22.039 [2024-07-24 22:30:17.045238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.045380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.045397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.045403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.045409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.045426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 
00:31:22.039 [2024-07-24 22:30:17.055254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.055427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.055445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.055452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.055462] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.055479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 00:31:22.039 [2024-07-24 22:30:17.065291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.065462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.065479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.065486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.065492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.065509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 00:31:22.039 [2024-07-24 22:30:17.075309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.075442] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.075459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.075467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.075473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.075490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 
00:31:22.039 [2024-07-24 22:30:17.085351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.085501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.085518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.085525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.085531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.085549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 00:31:22.039 [2024-07-24 22:30:17.095341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.095481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.095497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.095504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.095511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.095528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 00:31:22.039 [2024-07-24 22:30:17.105367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.105515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.105533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.105540] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.105547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.105564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 
00:31:22.039 [2024-07-24 22:30:17.115473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.115659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.115676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.115683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.115690] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.115707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 00:31:22.039 [2024-07-24 22:30:17.125479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.125617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.125634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.125641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.125647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.125664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 00:31:22.039 [2024-07-24 22:30:17.135456] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.039 [2024-07-24 22:30:17.135594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.039 [2024-07-24 22:30:17.135611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.039 [2024-07-24 22:30:17.135618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.039 [2024-07-24 22:30:17.135624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.039 [2024-07-24 22:30:17.135641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.039 qpair failed and we were unable to recover it. 
00:31:22.039 [2024-07-24 22:30:17.145517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.040 [2024-07-24 22:30:17.145662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.040 [2024-07-24 22:30:17.145679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.040 [2024-07-24 22:30:17.145688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.040 [2024-07-24 22:30:17.145695] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.040 [2024-07-24 22:30:17.145712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.040 qpair failed and we were unable to recover it. 00:31:22.040 [2024-07-24 22:30:17.155486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.040 [2024-07-24 22:30:17.155622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.040 [2024-07-24 22:30:17.155640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.040 [2024-07-24 22:30:17.155646] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.040 [2024-07-24 22:30:17.155652] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.040 [2024-07-24 22:30:17.155668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.040 qpair failed and we were unable to recover it. 00:31:22.300 [2024-07-24 22:30:17.165589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.300 [2024-07-24 22:30:17.165745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.300 [2024-07-24 22:30:17.165762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.300 [2024-07-24 22:30:17.165769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.300 [2024-07-24 22:30:17.165775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.300 [2024-07-24 22:30:17.165792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.300 qpair failed and we were unable to recover it. 
00:31:22.300 [2024-07-24 22:30:17.175629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.300 [2024-07-24 22:30:17.175774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.300 [2024-07-24 22:30:17.175790] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.300 [2024-07-24 22:30:17.175797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.300 [2024-07-24 22:30:17.175804] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.300 [2024-07-24 22:30:17.175820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.300 qpair failed and we were unable to recover it. 00:31:22.300 [2024-07-24 22:30:17.185625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.300 [2024-07-24 22:30:17.185774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.300 [2024-07-24 22:30:17.185792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.300 [2024-07-24 22:30:17.185800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.300 [2024-07-24 22:30:17.185806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.300 [2024-07-24 22:30:17.185824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.300 qpair failed and we were unable to recover it. 00:31:22.300 [2024-07-24 22:30:17.195649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.300 [2024-07-24 22:30:17.195788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.300 [2024-07-24 22:30:17.195807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.300 [2024-07-24 22:30:17.195815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.300 [2024-07-24 22:30:17.195822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.300 [2024-07-24 22:30:17.195839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.300 qpair failed and we were unable to recover it. 
00:31:22.301 [2024-07-24 22:30:17.205695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.301 [2024-07-24 22:30:17.205852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.301 [2024-07-24 22:30:17.205869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.301 [2024-07-24 22:30:17.205876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.301 [2024-07-24 22:30:17.205882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.301 [2024-07-24 22:30:17.205898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-07-24 22:30:17.215787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.301 [2024-07-24 22:30:17.215931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.301 [2024-07-24 22:30:17.215947] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.301 [2024-07-24 22:30:17.215955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.301 [2024-07-24 22:30:17.215960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.301 [2024-07-24 22:30:17.215977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-07-24 22:30:17.225802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.301 [2024-07-24 22:30:17.225942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.301 [2024-07-24 22:30:17.225959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.301 [2024-07-24 22:30:17.225966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.301 [2024-07-24 22:30:17.225972] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.301 [2024-07-24 22:30:17.225989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.301 qpair failed and we were unable to recover it. 
00:31:22.301 [2024-07-24 22:30:17.235770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.301 [2024-07-24 22:30:17.235905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.301 [2024-07-24 22:30:17.235922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.301 [2024-07-24 22:30:17.235932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.301 [2024-07-24 22:30:17.235938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.301 [2024-07-24 22:30:17.235955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-07-24 22:30:17.245862] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.301 [2024-07-24 22:30:17.245998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.301 [2024-07-24 22:30:17.246015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.301 [2024-07-24 22:30:17.246023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.301 [2024-07-24 22:30:17.246029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.301 [2024-07-24 22:30:17.246051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-07-24 22:30:17.255828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.301 [2024-07-24 22:30:17.255999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.301 [2024-07-24 22:30:17.256016] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.301 [2024-07-24 22:30:17.256023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.301 [2024-07-24 22:30:17.256029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.301 [2024-07-24 22:30:17.256051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.301 qpair failed and we were unable to recover it. 
00:31:22.301 [2024-07-24 22:30:17.265872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.301 [2024-07-24 22:30:17.266014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.301 [2024-07-24 22:30:17.266031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.301 [2024-07-24 22:30:17.266038] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.301 [2024-07-24 22:30:17.266049] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.301 [2024-07-24 22:30:17.266066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-07-24 22:30:17.275908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.301 [2024-07-24 22:30:17.276041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.301 [2024-07-24 22:30:17.276063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.301 [2024-07-24 22:30:17.276070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.301 [2024-07-24 22:30:17.276076] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.301 [2024-07-24 22:30:17.276093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-07-24 22:30:17.285959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.301 [2024-07-24 22:30:17.286103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.301 [2024-07-24 22:30:17.286120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.301 [2024-07-24 22:30:17.286128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.301 [2024-07-24 22:30:17.286134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.301 [2024-07-24 22:30:17.286151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.301 qpair failed and we were unable to recover it. 
00:31:22.301 [2024-07-24 22:30:17.295885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.301 [2024-07-24 22:30:17.296029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.301 [2024-07-24 22:30:17.296050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.301 [2024-07-24 22:30:17.296058] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.301 [2024-07-24 22:30:17.296064] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.301 [2024-07-24 22:30:17.296082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-07-24 22:30:17.305970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.301 [2024-07-24 22:30:17.306115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.301 [2024-07-24 22:30:17.306132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.301 [2024-07-24 22:30:17.306139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.301 [2024-07-24 22:30:17.306145] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.301 [2024-07-24 22:30:17.306162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.301 qpair failed and we were unable to recover it. 00:31:22.301 [2024-07-24 22:30:17.316224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.301 [2024-07-24 22:30:17.316364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.301 [2024-07-24 22:30:17.316380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.301 [2024-07-24 22:30:17.316388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.301 [2024-07-24 22:30:17.316394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.301 [2024-07-24 22:30:17.316411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.301 qpair failed and we were unable to recover it. 
00:31:22.301 [2024-07-24 22:30:17.326068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.301 [2024-07-24 22:30:17.326210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.302 [2024-07-24 22:30:17.326230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.302 [2024-07-24 22:30:17.326238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.302 [2024-07-24 22:30:17.326244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.302 [2024-07-24 22:30:17.326260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.302 qpair failed and we were unable to recover it. 00:31:22.302 [2024-07-24 22:30:17.336093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.302 [2024-07-24 22:30:17.336243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.302 [2024-07-24 22:30:17.336261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.302 [2024-07-24 22:30:17.336268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.302 [2024-07-24 22:30:17.336274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.302 [2024-07-24 22:30:17.336290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.302 qpair failed and we were unable to recover it. 00:31:22.302 [2024-07-24 22:30:17.346075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.302 [2024-07-24 22:30:17.346214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.302 [2024-07-24 22:30:17.346231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.302 [2024-07-24 22:30:17.346238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.302 [2024-07-24 22:30:17.346244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.302 [2024-07-24 22:30:17.346260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.302 qpair failed and we were unable to recover it. 
00:31:22.302 [2024-07-24 22:30:17.356122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.302 [2024-07-24 22:30:17.356272] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.302 [2024-07-24 22:30:17.356290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.302 [2024-07-24 22:30:17.356297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.302 [2024-07-24 22:30:17.356304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.302 [2024-07-24 22:30:17.356321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.302 qpair failed and we were unable to recover it. 00:31:22.302 [2024-07-24 22:30:17.366184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.302 [2024-07-24 22:30:17.366336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.302 [2024-07-24 22:30:17.366352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.302 [2024-07-24 22:30:17.366359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.302 [2024-07-24 22:30:17.366365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.302 [2024-07-24 22:30:17.366385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.302 qpair failed and we were unable to recover it. 00:31:22.302 [2024-07-24 22:30:17.376141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.302 [2024-07-24 22:30:17.376277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.302 [2024-07-24 22:30:17.376294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.302 [2024-07-24 22:30:17.376301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.302 [2024-07-24 22:30:17.376308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.302 [2024-07-24 22:30:17.376324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.302 qpair failed and we were unable to recover it. 
00:31:22.302 [2024-07-24 22:30:17.386198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.302 [2024-07-24 22:30:17.386344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.302 [2024-07-24 22:30:17.386360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.302 [2024-07-24 22:30:17.386368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.302 [2024-07-24 22:30:17.386374] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.302 [2024-07-24 22:30:17.386391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.302 qpair failed and we were unable to recover it. 00:31:22.302 [2024-07-24 22:30:17.396227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.302 [2024-07-24 22:30:17.396368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.302 [2024-07-24 22:30:17.396385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.302 [2024-07-24 22:30:17.396392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.302 [2024-07-24 22:30:17.396399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.302 [2024-07-24 22:30:17.396415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.302 qpair failed and we were unable to recover it. 00:31:22.302 [2024-07-24 22:30:17.406236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.302 [2024-07-24 22:30:17.406624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.302 [2024-07-24 22:30:17.406642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.302 [2024-07-24 22:30:17.406650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.302 [2024-07-24 22:30:17.406657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.302 [2024-07-24 22:30:17.406674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.302 qpair failed and we were unable to recover it. 
00:31:22.302 [2024-07-24 22:30:17.416277] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.302 [2024-07-24 22:30:17.416421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.302 [2024-07-24 22:30:17.416443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.302 [2024-07-24 22:30:17.416450] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.302 [2024-07-24 22:30:17.416456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.302 [2024-07-24 22:30:17.416473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.302 qpair failed and we were unable to recover it. 00:31:22.302 [2024-07-24 22:30:17.426297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.302 [2024-07-24 22:30:17.426456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.302 [2024-07-24 22:30:17.426473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.302 [2024-07-24 22:30:17.426480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.302 [2024-07-24 22:30:17.426486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.302 [2024-07-24 22:30:17.426503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.302 qpair failed and we were unable to recover it. 00:31:22.563 [2024-07-24 22:30:17.436367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.563 [2024-07-24 22:30:17.436513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.563 [2024-07-24 22:30:17.436531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.563 [2024-07-24 22:30:17.436537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.563 [2024-07-24 22:30:17.436543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.563 [2024-07-24 22:30:17.436560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.563 qpair failed and we were unable to recover it. 
00:31:22.563 [2024-07-24 22:30:17.446375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.563 [2024-07-24 22:30:17.446527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.563 [2024-07-24 22:30:17.446544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.563 [2024-07-24 22:30:17.446551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.563 [2024-07-24 22:30:17.446557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.563 [2024-07-24 22:30:17.446573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-07-24 22:30:17.456450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.563 [2024-07-24 22:30:17.456594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.563 [2024-07-24 22:30:17.456610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.563 [2024-07-24 22:30:17.456617] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.563 [2024-07-24 22:30:17.456626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.563 [2024-07-24 22:30:17.456643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-07-24 22:30:17.466354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.563 [2024-07-24 22:30:17.466494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.563 [2024-07-24 22:30:17.466510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.563 [2024-07-24 22:30:17.466517] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.563 [2024-07-24 22:30:17.466524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.563 [2024-07-24 22:30:17.466540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.563 qpair failed and we were unable to recover it. 
00:31:22.563 [2024-07-24 22:30:17.476500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.563 [2024-07-24 22:30:17.476654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.563 [2024-07-24 22:30:17.476671] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.563 [2024-07-24 22:30:17.476679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.563 [2024-07-24 22:30:17.476685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.563 [2024-07-24 22:30:17.476702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-07-24 22:30:17.486468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.563 [2024-07-24 22:30:17.486607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.563 [2024-07-24 22:30:17.486624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.563 [2024-07-24 22:30:17.486632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.563 [2024-07-24 22:30:17.486638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.563 [2024-07-24 22:30:17.486655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-07-24 22:30:17.496536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.563 [2024-07-24 22:30:17.496678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.563 [2024-07-24 22:30:17.496695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.563 [2024-07-24 22:30:17.496702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.563 [2024-07-24 22:30:17.496708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.563 [2024-07-24 22:30:17.496724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.563 qpair failed and we were unable to recover it. 
00:31:22.563 [2024-07-24 22:30:17.506534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.563 [2024-07-24 22:30:17.506680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.563 [2024-07-24 22:30:17.506697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.563 [2024-07-24 22:30:17.506704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.563 [2024-07-24 22:30:17.506710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.563 [2024-07-24 22:30:17.506726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-07-24 22:30:17.516583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.563 [2024-07-24 22:30:17.516721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.563 [2024-07-24 22:30:17.516738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.563 [2024-07-24 22:30:17.516745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.563 [2024-07-24 22:30:17.516751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.563 [2024-07-24 22:30:17.516767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-07-24 22:30:17.526601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.563 [2024-07-24 22:30:17.526753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.563 [2024-07-24 22:30:17.526770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.563 [2024-07-24 22:30:17.526777] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.563 [2024-07-24 22:30:17.526784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.563 [2024-07-24 22:30:17.526800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.563 qpair failed and we were unable to recover it. 
00:31:22.563 [2024-07-24 22:30:17.536617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.563 [2024-07-24 22:30:17.536753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.563 [2024-07-24 22:30:17.536770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.563 [2024-07-24 22:30:17.536777] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.563 [2024-07-24 22:30:17.536783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.563 [2024-07-24 22:30:17.536799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.563 qpair failed and we were unable to recover it. 00:31:22.563 [2024-07-24 22:30:17.546645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.563 [2024-07-24 22:30:17.546787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.563 [2024-07-24 22:30:17.546804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.563 [2024-07-24 22:30:17.546811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.563 [2024-07-24 22:30:17.546820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.546837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 00:31:22.564 [2024-07-24 22:30:17.556700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.564 [2024-07-24 22:30:17.556840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.564 [2024-07-24 22:30:17.556857] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.564 [2024-07-24 22:30:17.556864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.564 [2024-07-24 22:30:17.556871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.556887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 
00:31:22.564 [2024-07-24 22:30:17.566760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.564 [2024-07-24 22:30:17.566898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.564 [2024-07-24 22:30:17.566915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.564 [2024-07-24 22:30:17.566922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.564 [2024-07-24 22:30:17.566929] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.566946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 00:31:22.564 [2024-07-24 22:30:17.576766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.564 [2024-07-24 22:30:17.576906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.564 [2024-07-24 22:30:17.576923] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.564 [2024-07-24 22:30:17.576930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.564 [2024-07-24 22:30:17.576936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.576952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 00:31:22.564 [2024-07-24 22:30:17.586758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.564 [2024-07-24 22:30:17.586899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.564 [2024-07-24 22:30:17.586917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.564 [2024-07-24 22:30:17.586925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.564 [2024-07-24 22:30:17.586932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.586950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 
00:31:22.564 [2024-07-24 22:30:17.596815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.564 [2024-07-24 22:30:17.596958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.564 [2024-07-24 22:30:17.596975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.564 [2024-07-24 22:30:17.596982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.564 [2024-07-24 22:30:17.596989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.597006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 00:31:22.564 [2024-07-24 22:30:17.606759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.564 [2024-07-24 22:30:17.607084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.564 [2024-07-24 22:30:17.607103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.564 [2024-07-24 22:30:17.607110] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.564 [2024-07-24 22:30:17.607117] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.607135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 00:31:22.564 [2024-07-24 22:30:17.616859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.564 [2024-07-24 22:30:17.616996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.564 [2024-07-24 22:30:17.617012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.564 [2024-07-24 22:30:17.617020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.564 [2024-07-24 22:30:17.617026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.617048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 
00:31:22.564 [2024-07-24 22:30:17.626887] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.564 [2024-07-24 22:30:17.627029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.564 [2024-07-24 22:30:17.627052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.564 [2024-07-24 22:30:17.627060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.564 [2024-07-24 22:30:17.627067] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.627084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 00:31:22.564 [2024-07-24 22:30:17.636928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.564 [2024-07-24 22:30:17.637101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.564 [2024-07-24 22:30:17.637118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.564 [2024-07-24 22:30:17.637128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.564 [2024-07-24 22:30:17.637134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.637150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 00:31:22.564 [2024-07-24 22:30:17.646981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.564 [2024-07-24 22:30:17.647131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.564 [2024-07-24 22:30:17.647148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.564 [2024-07-24 22:30:17.647155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.564 [2024-07-24 22:30:17.647161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.647177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 
00:31:22.564 [2024-07-24 22:30:17.656923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.564 [2024-07-24 22:30:17.657067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.564 [2024-07-24 22:30:17.657084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.564 [2024-07-24 22:30:17.657091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.564 [2024-07-24 22:30:17.657097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.657114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 00:31:22.564 [2024-07-24 22:30:17.667018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.564 [2024-07-24 22:30:17.667192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.564 [2024-07-24 22:30:17.667209] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.564 [2024-07-24 22:30:17.667216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.564 [2024-07-24 22:30:17.667223] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.667240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 00:31:22.564 [2024-07-24 22:30:17.677087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.564 [2024-07-24 22:30:17.677227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.564 [2024-07-24 22:30:17.677244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.564 [2024-07-24 22:30:17.677251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.564 [2024-07-24 22:30:17.677257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.564 [2024-07-24 22:30:17.677274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.564 qpair failed and we were unable to recover it. 
00:31:22.565 [2024-07-24 22:30:17.687070] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.565 [2024-07-24 22:30:17.687211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.565 [2024-07-24 22:30:17.687227] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.565 [2024-07-24 22:30:17.687234] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.565 [2024-07-24 22:30:17.687241] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.565 [2024-07-24 22:30:17.687257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.565 qpair failed and we were unable to recover it. 00:31:22.825 [2024-07-24 22:30:17.697139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.825 [2024-07-24 22:30:17.697325] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.825 [2024-07-24 22:30:17.697342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.825 [2024-07-24 22:30:17.697350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.825 [2024-07-24 22:30:17.697357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.825 [2024-07-24 22:30:17.697374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.825 qpair failed and we were unable to recover it. 00:31:22.825 [2024-07-24 22:30:17.707115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.825 [2024-07-24 22:30:17.707255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.825 [2024-07-24 22:30:17.707271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.825 [2024-07-24 22:30:17.707278] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.825 [2024-07-24 22:30:17.707284] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.825 [2024-07-24 22:30:17.707301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.825 qpair failed and we were unable to recover it. 
00:31:22.825 [2024-07-24 22:30:17.717168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.825 [2024-07-24 22:30:17.717311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.825 [2024-07-24 22:30:17.717328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.825 [2024-07-24 22:30:17.717335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.825 [2024-07-24 22:30:17.717342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.825 [2024-07-24 22:30:17.717358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.825 qpair failed and we were unable to recover it. 00:31:22.825 [2024-07-24 22:30:17.727203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.825 [2024-07-24 22:30:17.727345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.825 [2024-07-24 22:30:17.727361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.825 [2024-07-24 22:30:17.727372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.825 [2024-07-24 22:30:17.727379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.825 [2024-07-24 22:30:17.727395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.825 qpair failed and we were unable to recover it. 00:31:22.825 [2024-07-24 22:30:17.737250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.737387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.737406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.826 [2024-07-24 22:30:17.737414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.826 [2024-07-24 22:30:17.737420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.826 [2024-07-24 22:30:17.737437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.826 qpair failed and we were unable to recover it. 
00:31:22.826 [2024-07-24 22:30:17.747234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.747375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.747391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.826 [2024-07-24 22:30:17.747399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.826 [2024-07-24 22:30:17.747405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.826 [2024-07-24 22:30:17.747421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.826 qpair failed and we were unable to recover it. 00:31:22.826 [2024-07-24 22:30:17.757288] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.757431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.757449] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.826 [2024-07-24 22:30:17.757456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.826 [2024-07-24 22:30:17.757462] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.826 [2024-07-24 22:30:17.757479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.826 qpair failed and we were unable to recover it. 00:31:22.826 [2024-07-24 22:30:17.767313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.767455] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.767472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.826 [2024-07-24 22:30:17.767479] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.826 [2024-07-24 22:30:17.767485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.826 [2024-07-24 22:30:17.767501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.826 qpair failed and we were unable to recover it. 
00:31:22.826 [2024-07-24 22:30:17.777362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.777502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.777519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.826 [2024-07-24 22:30:17.777526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.826 [2024-07-24 22:30:17.777532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.826 [2024-07-24 22:30:17.777549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.826 qpair failed and we were unable to recover it. 00:31:22.826 [2024-07-24 22:30:17.787301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.787438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.787458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.826 [2024-07-24 22:30:17.787466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.826 [2024-07-24 22:30:17.787474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.826 [2024-07-24 22:30:17.787491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.826 qpair failed and we were unable to recover it. 00:31:22.826 [2024-07-24 22:30:17.797330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.797472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.797489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.826 [2024-07-24 22:30:17.797496] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.826 [2024-07-24 22:30:17.797502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.826 [2024-07-24 22:30:17.797519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.826 qpair failed and we were unable to recover it. 
00:31:22.826 [2024-07-24 22:30:17.807430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.807583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.807601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.826 [2024-07-24 22:30:17.807607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.826 [2024-07-24 22:30:17.807614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.826 [2024-07-24 22:30:17.807630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.826 qpair failed and we were unable to recover it. 00:31:22.826 [2024-07-24 22:30:17.817451] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.817625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.817645] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.826 [2024-07-24 22:30:17.817652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.826 [2024-07-24 22:30:17.817662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.826 [2024-07-24 22:30:17.817678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.826 qpair failed and we were unable to recover it. 00:31:22.826 [2024-07-24 22:30:17.827480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.827616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.827632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.826 [2024-07-24 22:30:17.827639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.826 [2024-07-24 22:30:17.827645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.826 [2024-07-24 22:30:17.827662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.826 qpair failed and we were unable to recover it. 
00:31:22.826 [2024-07-24 22:30:17.837527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.837665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.837682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.826 [2024-07-24 22:30:17.837689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.826 [2024-07-24 22:30:17.837695] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.826 [2024-07-24 22:30:17.837711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.826 qpair failed and we were unable to recover it. 00:31:22.826 [2024-07-24 22:30:17.847556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.847734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.847750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.826 [2024-07-24 22:30:17.847758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.826 [2024-07-24 22:30:17.847765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.826 [2024-07-24 22:30:17.847782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.826 qpair failed and we were unable to recover it. 00:31:22.826 [2024-07-24 22:30:17.857589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.857755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.857771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.826 [2024-07-24 22:30:17.857778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.826 [2024-07-24 22:30:17.857784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.826 [2024-07-24 22:30:17.857805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.826 qpair failed and we were unable to recover it. 
00:31:22.826 [2024-07-24 22:30:17.867589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.826 [2024-07-24 22:30:17.867730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.826 [2024-07-24 22:30:17.867747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.827 [2024-07-24 22:30:17.867754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.827 [2024-07-24 22:30:17.867760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.827 [2024-07-24 22:30:17.867776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.827 qpair failed and we were unable to recover it. 00:31:22.827 [2024-07-24 22:30:17.877629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.827 [2024-07-24 22:30:17.877796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.827 [2024-07-24 22:30:17.877813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.827 [2024-07-24 22:30:17.877820] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.827 [2024-07-24 22:30:17.877826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.827 [2024-07-24 22:30:17.877842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.827 qpair failed and we were unable to recover it. 00:31:22.827 [2024-07-24 22:30:17.887729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.827 [2024-07-24 22:30:17.887868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.827 [2024-07-24 22:30:17.887886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.827 [2024-07-24 22:30:17.887894] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.827 [2024-07-24 22:30:17.887901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.827 [2024-07-24 22:30:17.887918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.827 qpair failed and we were unable to recover it. 
00:31:22.827 [2024-07-24 22:30:17.897711] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.827 [2024-07-24 22:30:17.897849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.827 [2024-07-24 22:30:17.897866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.827 [2024-07-24 22:30:17.897873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.827 [2024-07-24 22:30:17.897880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.827 [2024-07-24 22:30:17.897896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.827 qpair failed and we were unable to recover it. 00:31:22.827 [2024-07-24 22:30:17.907714] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.827 [2024-07-24 22:30:17.907853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.827 [2024-07-24 22:30:17.907873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.827 [2024-07-24 22:30:17.907880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.827 [2024-07-24 22:30:17.907886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.827 [2024-07-24 22:30:17.907902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.827 qpair failed and we were unable to recover it. 00:31:22.827 [2024-07-24 22:30:17.917767] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.827 [2024-07-24 22:30:17.917907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.827 [2024-07-24 22:30:17.917924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.827 [2024-07-24 22:30:17.917931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.827 [2024-07-24 22:30:17.917938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.827 [2024-07-24 22:30:17.917955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.827 qpair failed and we were unable to recover it. 
00:31:22.827 [2024-07-24 22:30:17.927797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.827 [2024-07-24 22:30:17.927938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.827 [2024-07-24 22:30:17.927954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.827 [2024-07-24 22:30:17.927961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.827 [2024-07-24 22:30:17.927967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.827 [2024-07-24 22:30:17.927984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.827 qpair failed and we were unable to recover it. 00:31:22.827 [2024-07-24 22:30:17.937789] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.827 [2024-07-24 22:30:17.937937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.827 [2024-07-24 22:30:17.937954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.827 [2024-07-24 22:30:17.937961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.827 [2024-07-24 22:30:17.937967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.827 [2024-07-24 22:30:17.937983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.827 qpair failed and we were unable to recover it. 00:31:22.827 [2024-07-24 22:30:17.947843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:22.827 [2024-07-24 22:30:17.947991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:22.827 [2024-07-24 22:30:17.948008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:22.827 [2024-07-24 22:30:17.948015] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:22.827 [2024-07-24 22:30:17.948021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:22.827 [2024-07-24 22:30:17.948040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:22.827 qpair failed and we were unable to recover it. 
00:31:23.088 [2024-07-24 22:30:17.957900] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.088 [2024-07-24 22:30:17.958058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.088 [2024-07-24 22:30:17.958083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.088 [2024-07-24 22:30:17.958091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.088 [2024-07-24 22:30:17.958098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:23.088 [2024-07-24 22:30:17.958115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:23.088 qpair failed and we were unable to recover it. 00:31:23.088 [2024-07-24 22:30:17.967918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.088 [2024-07-24 22:30:17.968063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.088 [2024-07-24 22:30:17.968082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.088 [2024-07-24 22:30:17.968089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.088 [2024-07-24 22:30:17.968096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:23.088 [2024-07-24 22:30:17.968114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:23.088 qpair failed and we were unable to recover it. 00:31:23.088 [2024-07-24 22:30:17.968228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13056b0 is same with the state(5) to be set 00:31:23.088 [2024-07-24 22:30:17.977982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.088 [2024-07-24 22:30:17.978195] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.088 [2024-07-24 22:30:17.978223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.088 [2024-07-24 22:30:17.978234] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.088 [2024-07-24 22:30:17.978244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.088 [2024-07-24 22:30:17.978268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.088 qpair failed and we were unable to recover it. 
00:31:23.088 [2024-07-24 22:30:17.987930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.088 [2024-07-24 22:30:17.988078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.088 [2024-07-24 22:30:17.988098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.088 [2024-07-24 22:30:17.988106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.088 [2024-07-24 22:30:17.988112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.088 [2024-07-24 22:30:17.988130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.088 qpair failed and we were unable to recover it. 00:31:23.088 [2024-07-24 22:30:17.998187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.088 [2024-07-24 22:30:17.998338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.088 [2024-07-24 22:30:17.998358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.088 [2024-07-24 22:30:17.998365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.088 [2024-07-24 22:30:17.998371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.088 [2024-07-24 22:30:17.998388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.088 qpair failed and we were unable to recover it. 00:31:23.088 [2024-07-24 22:30:18.008014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.088 [2024-07-24 22:30:18.008166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.088 [2024-07-24 22:30:18.008186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.088 [2024-07-24 22:30:18.008193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.088 [2024-07-24 22:30:18.008199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.088 [2024-07-24 22:30:18.008216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.088 qpair failed and we were unable to recover it. 
00:31:23.088 [2024-07-24 22:30:18.018063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.088 [2024-07-24 22:30:18.018205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.088 [2024-07-24 22:30:18.018224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.088 [2024-07-24 22:30:18.018231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.088 [2024-07-24 22:30:18.018238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.088 [2024-07-24 22:30:18.018254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.088 qpair failed and we were unable to recover it. 00:31:23.088 [2024-07-24 22:30:18.028092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.088 [2024-07-24 22:30:18.028234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.088 [2024-07-24 22:30:18.028254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.088 [2024-07-24 22:30:18.028261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.088 [2024-07-24 22:30:18.028267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.088 [2024-07-24 22:30:18.028284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.088 qpair failed and we were unable to recover it. 00:31:23.088 [2024-07-24 22:30:18.038119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.088 [2024-07-24 22:30:18.038267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.088 [2024-07-24 22:30:18.038294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.088 [2024-07-24 22:30:18.038305] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.088 [2024-07-24 22:30:18.038311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.088 [2024-07-24 22:30:18.038328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.088 qpair failed and we were unable to recover it. 
00:31:23.088 [2024-07-24 22:30:18.048103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.088 [2024-07-24 22:30:18.048249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.088 [2024-07-24 22:30:18.048269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.088 [2024-07-24 22:30:18.048276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.088 [2024-07-24 22:30:18.048283] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.088 [2024-07-24 22:30:18.048300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.088 qpair failed and we were unable to recover it. 00:31:23.088 [2024-07-24 22:30:18.058204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.088 [2024-07-24 22:30:18.058351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.088 [2024-07-24 22:30:18.058370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.088 [2024-07-24 22:30:18.058378] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.088 [2024-07-24 22:30:18.058385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.088 [2024-07-24 22:30:18.058403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.088 qpair failed and we were unable to recover it. 00:31:23.088 [2024-07-24 22:30:18.068265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.088 [2024-07-24 22:30:18.068415] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.088 [2024-07-24 22:30:18.068435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.088 [2024-07-24 22:30:18.068442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.088 [2024-07-24 22:30:18.068448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.088 [2024-07-24 22:30:18.068464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.089 qpair failed and we were unable to recover it. 
00:31:23.089 [2024-07-24 22:30:18.078242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.089 [2024-07-24 22:30:18.078388] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.089 [2024-07-24 22:30:18.078407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.089 [2024-07-24 22:30:18.078414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.089 [2024-07-24 22:30:18.078421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.089 [2024-07-24 22:30:18.078437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.089 qpair failed and we were unable to recover it. 00:31:23.089 [2024-07-24 22:30:18.088325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.089 [2024-07-24 22:30:18.088474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.089 [2024-07-24 22:30:18.088494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.089 [2024-07-24 22:30:18.088501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.089 [2024-07-24 22:30:18.088508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.089 [2024-07-24 22:30:18.088525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.089 qpair failed and we were unable to recover it. 00:31:23.089 [2024-07-24 22:30:18.098312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.089 [2024-07-24 22:30:18.098463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.089 [2024-07-24 22:30:18.098481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.089 [2024-07-24 22:30:18.098489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.089 [2024-07-24 22:30:18.098495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.089 [2024-07-24 22:30:18.098511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.089 qpair failed and we were unable to recover it. 
00:31:23.089 [2024-07-24 22:30:18.108333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.089 [2024-07-24 22:30:18.108484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.089 [2024-07-24 22:30:18.108504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.089 [2024-07-24 22:30:18.108511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.089 [2024-07-24 22:30:18.108517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.089 [2024-07-24 22:30:18.108535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.089 qpair failed and we were unable to recover it. 00:31:23.089 [2024-07-24 22:30:18.118335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.089 [2024-07-24 22:30:18.118519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.089 [2024-07-24 22:30:18.118538] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.089 [2024-07-24 22:30:18.118545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.089 [2024-07-24 22:30:18.118552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.089 [2024-07-24 22:30:18.118570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.089 qpair failed and we were unable to recover it. 00:31:23.089 [2024-07-24 22:30:18.128371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.089 [2024-07-24 22:30:18.128506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.089 [2024-07-24 22:30:18.128526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.089 [2024-07-24 22:30:18.128536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.089 [2024-07-24 22:30:18.128543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.089 [2024-07-24 22:30:18.128559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.089 qpair failed and we were unable to recover it. 
00:31:23.089 [2024-07-24 22:30:18.138406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.089 [2024-07-24 22:30:18.138548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.089 [2024-07-24 22:30:18.138569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.089 [2024-07-24 22:30:18.138576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.089 [2024-07-24 22:30:18.138582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.089 [2024-07-24 22:30:18.138599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.089 qpair failed and we were unable to recover it. 00:31:23.089 [2024-07-24 22:30:18.148433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.089 [2024-07-24 22:30:18.148587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.089 [2024-07-24 22:30:18.148607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.089 [2024-07-24 22:30:18.148614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.089 [2024-07-24 22:30:18.148621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.089 [2024-07-24 22:30:18.148638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.089 qpair failed and we were unable to recover it. 00:31:23.089 [2024-07-24 22:30:18.158459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.089 [2024-07-24 22:30:18.158597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.089 [2024-07-24 22:30:18.158617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.089 [2024-07-24 22:30:18.158624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.089 [2024-07-24 22:30:18.158630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.089 [2024-07-24 22:30:18.158648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.089 qpair failed and we were unable to recover it. 
00:31:23.089 [2024-07-24 22:30:18.168519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.089 [2024-07-24 22:30:18.168669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.089 [2024-07-24 22:30:18.168688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.089 [2024-07-24 22:30:18.168695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.089 [2024-07-24 22:30:18.168702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.089 [2024-07-24 22:30:18.168719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.089 qpair failed and we were unable to recover it. 00:31:23.089 [2024-07-24 22:30:18.178481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.089 [2024-07-24 22:30:18.178621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.089 [2024-07-24 22:30:18.178640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.089 [2024-07-24 22:30:18.178648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.089 [2024-07-24 22:30:18.178654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.089 [2024-07-24 22:30:18.178670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.089 qpair failed and we were unable to recover it. 00:31:23.089 [2024-07-24 22:30:18.188549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.089 [2024-07-24 22:30:18.188700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.089 [2024-07-24 22:30:18.188720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.089 [2024-07-24 22:30:18.188727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.089 [2024-07-24 22:30:18.188733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.089 [2024-07-24 22:30:18.188750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.089 qpair failed and we were unable to recover it. 
00:31:23.089 [2024-07-24 22:30:18.198580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.090 [2024-07-24 22:30:18.198756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.090 [2024-07-24 22:30:18.198777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.090 [2024-07-24 22:30:18.198784] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.090 [2024-07-24 22:30:18.198791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.090 [2024-07-24 22:30:18.198807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.090 qpair failed and we were unable to recover it. 00:31:23.090 [2024-07-24 22:30:18.208663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.090 [2024-07-24 22:30:18.208807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.090 [2024-07-24 22:30:18.208827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.090 [2024-07-24 22:30:18.208834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.090 [2024-07-24 22:30:18.208841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.090 [2024-07-24 22:30:18.208858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.090 qpair failed and we were unable to recover it. 00:31:23.090 [2024-07-24 22:30:18.218705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.090 [2024-07-24 22:30:18.218873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.090 [2024-07-24 22:30:18.218893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.090 [2024-07-24 22:30:18.218904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.090 [2024-07-24 22:30:18.218910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.090 [2024-07-24 22:30:18.218927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.090 qpair failed and we were unable to recover it. 
00:31:23.351 [2024-07-24 22:30:18.228623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.351 [2024-07-24 22:30:18.228797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.351 [2024-07-24 22:30:18.228816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.351 [2024-07-24 22:30:18.228823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.351 [2024-07-24 22:30:18.228830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.351 [2024-07-24 22:30:18.228847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.351 qpair failed and we were unable to recover it. 00:31:23.351 [2024-07-24 22:30:18.238696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.351 [2024-07-24 22:30:18.238872] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.351 [2024-07-24 22:30:18.238892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.351 [2024-07-24 22:30:18.238899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.351 [2024-07-24 22:30:18.238905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.351 [2024-07-24 22:30:18.238922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.351 qpair failed and we were unable to recover it. 00:31:23.351 [2024-07-24 22:30:18.248711] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.351 [2024-07-24 22:30:18.248848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.351 [2024-07-24 22:30:18.248869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.351 [2024-07-24 22:30:18.248878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.351 [2024-07-24 22:30:18.248886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.351 [2024-07-24 22:30:18.248904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.351 qpair failed and we were unable to recover it. 
00:31:23.351 [2024-07-24 22:30:18.258766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.351 [2024-07-24 22:30:18.258925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.351 [2024-07-24 22:30:18.258945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.351 [2024-07-24 22:30:18.258952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.351 [2024-07-24 22:30:18.258958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.351 [2024-07-24 22:30:18.258975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.351 qpair failed and we were unable to recover it. 00:31:23.351 [2024-07-24 22:30:18.268771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.351 [2024-07-24 22:30:18.268909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.351 [2024-07-24 22:30:18.268928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.351 [2024-07-24 22:30:18.268936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.351 [2024-07-24 22:30:18.268942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.351 [2024-07-24 22:30:18.268959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.351 qpair failed and we were unable to recover it. 00:31:23.351 [2024-07-24 22:30:18.278807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.351 [2024-07-24 22:30:18.278942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.351 [2024-07-24 22:30:18.278961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.351 [2024-07-24 22:30:18.278968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.351 [2024-07-24 22:30:18.278975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.351 [2024-07-24 22:30:18.278992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.351 qpair failed and we were unable to recover it. 
00:31:23.351 [2024-07-24 22:30:18.288783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.351 [2024-07-24 22:30:18.288921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.351 [2024-07-24 22:30:18.288941] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.351 [2024-07-24 22:30:18.288948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.351 [2024-07-24 22:30:18.288955] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.351 [2024-07-24 22:30:18.288972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.351 qpair failed and we were unable to recover it. 00:31:23.351 [2024-07-24 22:30:18.298892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.351 [2024-07-24 22:30:18.299088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.351 [2024-07-24 22:30:18.299107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.351 [2024-07-24 22:30:18.299114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.351 [2024-07-24 22:30:18.299121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.351 [2024-07-24 22:30:18.299139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.351 qpair failed and we were unable to recover it. 00:31:23.351 [2024-07-24 22:30:18.308896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.351 [2024-07-24 22:30:18.309038] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.351 [2024-07-24 22:30:18.309070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.351 [2024-07-24 22:30:18.309078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.351 [2024-07-24 22:30:18.309084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.351 [2024-07-24 22:30:18.309101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.351 qpair failed and we were unable to recover it. 
00:31:23.351 [2024-07-24 22:30:18.318887] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.351 [2024-07-24 22:30:18.319028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.351 [2024-07-24 22:30:18.319053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.351 [2024-07-24 22:30:18.319061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.351 [2024-07-24 22:30:18.319068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.351 [2024-07-24 22:30:18.319085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.351 qpair failed and we were unable to recover it. 00:31:23.351 [2024-07-24 22:30:18.328896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.351 [2024-07-24 22:30:18.329031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.351 [2024-07-24 22:30:18.329057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.351 [2024-07-24 22:30:18.329064] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.351 [2024-07-24 22:30:18.329071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.351 [2024-07-24 22:30:18.329089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.351 qpair failed and we were unable to recover it. 00:31:23.351 [2024-07-24 22:30:18.339005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.351 [2024-07-24 22:30:18.339336] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.351 [2024-07-24 22:30:18.339355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.339362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.339369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.352 [2024-07-24 22:30:18.339386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.352 qpair failed and we were unable to recover it. 
00:31:23.352 [2024-07-24 22:30:18.349038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.352 [2024-07-24 22:30:18.349215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.352 [2024-07-24 22:30:18.349234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.349242] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.349248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.352 [2024-07-24 22:30:18.349266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.352 qpair failed and we were unable to recover it. 00:31:23.352 [2024-07-24 22:30:18.358997] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.352 [2024-07-24 22:30:18.359142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.352 [2024-07-24 22:30:18.359162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.359169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.359175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.352 [2024-07-24 22:30:18.359193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.352 qpair failed and we were unable to recover it. 00:31:23.352 [2024-07-24 22:30:18.369107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.352 [2024-07-24 22:30:18.369246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.352 [2024-07-24 22:30:18.369265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.369272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.369278] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.352 [2024-07-24 22:30:18.369295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.352 qpair failed and we were unable to recover it. 
00:31:23.352 [2024-07-24 22:30:18.379073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.352 [2024-07-24 22:30:18.379214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.352 [2024-07-24 22:30:18.379233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.379241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.379247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.352 [2024-07-24 22:30:18.379264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.352 qpair failed and we were unable to recover it. 00:31:23.352 [2024-07-24 22:30:18.389095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.352 [2024-07-24 22:30:18.389236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.352 [2024-07-24 22:30:18.389256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.389263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.389270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.352 [2024-07-24 22:30:18.389287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.352 qpair failed and we were unable to recover it. 00:31:23.352 [2024-07-24 22:30:18.399177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.352 [2024-07-24 22:30:18.399328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.352 [2024-07-24 22:30:18.399351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.399359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.399365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.352 [2024-07-24 22:30:18.399382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.352 qpair failed and we were unable to recover it. 
00:31:23.352 [2024-07-24 22:30:18.409192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.352 [2024-07-24 22:30:18.409364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.352 [2024-07-24 22:30:18.409384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.409391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.409398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.352 [2024-07-24 22:30:18.409415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.352 qpair failed and we were unable to recover it. 00:31:23.352 [2024-07-24 22:30:18.419191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.352 [2024-07-24 22:30:18.419329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.352 [2024-07-24 22:30:18.419349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.419356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.419363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.352 [2024-07-24 22:30:18.419379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.352 qpair failed and we were unable to recover it. 00:31:23.352 [2024-07-24 22:30:18.429194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.352 [2024-07-24 22:30:18.429337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.352 [2024-07-24 22:30:18.429357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.429365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.429371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.352 [2024-07-24 22:30:18.429389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.352 qpair failed and we were unable to recover it. 
00:31:23.352 [2024-07-24 22:30:18.439239] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.352 [2024-07-24 22:30:18.439384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.352 [2024-07-24 22:30:18.439405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.439412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.439418] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.352 [2024-07-24 22:30:18.439441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.352 qpair failed and we were unable to recover it. 00:31:23.352 [2024-07-24 22:30:18.449257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.352 [2024-07-24 22:30:18.449399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.352 [2024-07-24 22:30:18.449419] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.449426] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.449433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.352 [2024-07-24 22:30:18.449450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.352 qpair failed and we were unable to recover it. 00:31:23.352 [2024-07-24 22:30:18.459351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.352 [2024-07-24 22:30:18.459502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.352 [2024-07-24 22:30:18.459521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.459529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.459536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.352 [2024-07-24 22:30:18.459553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.352 qpair failed and we were unable to recover it. 
00:31:23.352 [2024-07-24 22:30:18.469380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.352 [2024-07-24 22:30:18.469521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.352 [2024-07-24 22:30:18.469541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.352 [2024-07-24 22:30:18.469549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.352 [2024-07-24 22:30:18.469555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.353 [2024-07-24 22:30:18.469573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.353 qpair failed and we were unable to recover it. 00:31:23.353 [2024-07-24 22:30:18.479401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.353 [2024-07-24 22:30:18.479600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.353 [2024-07-24 22:30:18.479619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.353 [2024-07-24 22:30:18.479627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.353 [2024-07-24 22:30:18.479633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.353 [2024-07-24 22:30:18.479650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.353 qpair failed and we were unable to recover it. 00:31:23.613 [2024-07-24 22:30:18.489435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.613 [2024-07-24 22:30:18.489577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.613 [2024-07-24 22:30:18.489600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.613 [2024-07-24 22:30:18.489607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.613 [2024-07-24 22:30:18.489614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.613 [2024-07-24 22:30:18.489630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.613 qpair failed and we were unable to recover it. 
00:31:23.613 [2024-07-24 22:30:18.499449] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.613 [2024-07-24 22:30:18.499632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.613 [2024-07-24 22:30:18.499653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.613 [2024-07-24 22:30:18.499660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.613 [2024-07-24 22:30:18.499667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.613 [2024-07-24 22:30:18.499685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.613 qpair failed and we were unable to recover it. 00:31:23.613 [2024-07-24 22:30:18.509489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.613 [2024-07-24 22:30:18.509629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.613 [2024-07-24 22:30:18.509648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.613 [2024-07-24 22:30:18.509655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.613 [2024-07-24 22:30:18.509662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.613 [2024-07-24 22:30:18.509679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.613 qpair failed and we were unable to recover it. 00:31:23.613 [2024-07-24 22:30:18.519666] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.613 [2024-07-24 22:30:18.519803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.613 [2024-07-24 22:30:18.519822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.613 [2024-07-24 22:30:18.519829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.613 [2024-07-24 22:30:18.519836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.613 [2024-07-24 22:30:18.519853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.613 qpair failed and we were unable to recover it. 
00:31:23.613 [2024-07-24 22:30:18.529552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.613 [2024-07-24 22:30:18.529691] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.613 [2024-07-24 22:30:18.529710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.613 [2024-07-24 22:30:18.529717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.613 [2024-07-24 22:30:18.529724] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.613 [2024-07-24 22:30:18.529744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.613 qpair failed and we were unable to recover it. 00:31:23.613 [2024-07-24 22:30:18.539589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.613 [2024-07-24 22:30:18.539727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.613 [2024-07-24 22:30:18.539747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.613 [2024-07-24 22:30:18.539754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.613 [2024-07-24 22:30:18.539761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.613 [2024-07-24 22:30:18.539778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.613 qpair failed and we were unable to recover it. 00:31:23.613 [2024-07-24 22:30:18.549561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.613 [2024-07-24 22:30:18.549702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.613 [2024-07-24 22:30:18.549722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.613 [2024-07-24 22:30:18.549729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.613 [2024-07-24 22:30:18.549736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.613 [2024-07-24 22:30:18.549754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.613 qpair failed and we were unable to recover it. 
00:31:23.613 [2024-07-24 22:30:18.559591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.613 [2024-07-24 22:30:18.559732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.613 [2024-07-24 22:30:18.559751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.613 [2024-07-24 22:30:18.559758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.613 [2024-07-24 22:30:18.559765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.613 [2024-07-24 22:30:18.559781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.613 qpair failed and we were unable to recover it. 00:31:23.613 [2024-07-24 22:30:18.569676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.569821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.569840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.569847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.614 [2024-07-24 22:30:18.569853] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.614 [2024-07-24 22:30:18.569870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.614 qpair failed and we were unable to recover it. 00:31:23.614 [2024-07-24 22:30:18.579661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.579804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.579826] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.579834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.614 [2024-07-24 22:30:18.579841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.614 [2024-07-24 22:30:18.579858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.614 qpair failed and we were unable to recover it. 
00:31:23.614 [2024-07-24 22:30:18.589682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.589818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.589837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.589844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.614 [2024-07-24 22:30:18.589850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.614 [2024-07-24 22:30:18.589867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.614 qpair failed and we were unable to recover it. 00:31:23.614 [2024-07-24 22:30:18.599710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.599852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.599872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.599879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.614 [2024-07-24 22:30:18.599887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.614 [2024-07-24 22:30:18.599903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.614 qpair failed and we were unable to recover it. 00:31:23.614 [2024-07-24 22:30:18.609833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.610017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.610036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.610049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.614 [2024-07-24 22:30:18.610056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.614 [2024-07-24 22:30:18.610073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.614 qpair failed and we were unable to recover it. 
00:31:23.614 [2024-07-24 22:30:18.619786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.619929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.619949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.619956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.614 [2024-07-24 22:30:18.619963] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.614 [2024-07-24 22:30:18.619983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.614 qpair failed and we were unable to recover it. 00:31:23.614 [2024-07-24 22:30:18.629831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.629973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.629992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.629999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.614 [2024-07-24 22:30:18.630005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.614 [2024-07-24 22:30:18.630022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.614 qpair failed and we were unable to recover it. 00:31:23.614 [2024-07-24 22:30:18.639850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.639990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.640009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.640017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.614 [2024-07-24 22:30:18.640023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.614 [2024-07-24 22:30:18.640039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.614 qpair failed and we were unable to recover it. 
00:31:23.614 [2024-07-24 22:30:18.649915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.650065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.650085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.650091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.614 [2024-07-24 22:30:18.650098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.614 [2024-07-24 22:30:18.650114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.614 qpair failed and we were unable to recover it. 00:31:23.614 [2024-07-24 22:30:18.659923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.660073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.660092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.660099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.614 [2024-07-24 22:30:18.660105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.614 [2024-07-24 22:30:18.660122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.614 qpair failed and we were unable to recover it. 00:31:23.614 [2024-07-24 22:30:18.669978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.670130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.670153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.670160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.614 [2024-07-24 22:30:18.670166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.614 [2024-07-24 22:30:18.670184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.614 qpair failed and we were unable to recover it. 
00:31:23.614 [2024-07-24 22:30:18.680002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.680149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.680175] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.680183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.614 [2024-07-24 22:30:18.680190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.614 [2024-07-24 22:30:18.680208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.614 qpair failed and we were unable to recover it. 00:31:23.614 [2024-07-24 22:30:18.690121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.690291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.690311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.690318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.614 [2024-07-24 22:30:18.690325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.614 [2024-07-24 22:30:18.690342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.614 qpair failed and we were unable to recover it. 00:31:23.614 [2024-07-24 22:30:18.700013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.614 [2024-07-24 22:30:18.700158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.614 [2024-07-24 22:30:18.700179] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.614 [2024-07-24 22:30:18.700186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.615 [2024-07-24 22:30:18.700193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.615 [2024-07-24 22:30:18.700211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.615 qpair failed and we were unable to recover it. 
00:31:23.615 [2024-07-24 22:30:18.710090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.615 [2024-07-24 22:30:18.710243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.615 [2024-07-24 22:30:18.710262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.615 [2024-07-24 22:30:18.710270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.615 [2024-07-24 22:30:18.710277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.615 [2024-07-24 22:30:18.710296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.615 qpair failed and we were unable to recover it. 00:31:23.615 [2024-07-24 22:30:18.720068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.615 [2024-07-24 22:30:18.720213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.615 [2024-07-24 22:30:18.720233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.615 [2024-07-24 22:30:18.720239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.615 [2024-07-24 22:30:18.720246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.615 [2024-07-24 22:30:18.720263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.615 qpair failed and we were unable to recover it. 00:31:23.615 [2024-07-24 22:30:18.730100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.615 [2024-07-24 22:30:18.730255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.615 [2024-07-24 22:30:18.730275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.615 [2024-07-24 22:30:18.730282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.615 [2024-07-24 22:30:18.730288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.615 [2024-07-24 22:30:18.730305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.615 qpair failed and we were unable to recover it. 
00:31:23.615 [2024-07-24 22:30:18.740180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.615 [2024-07-24 22:30:18.740320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.615 [2024-07-24 22:30:18.740340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.615 [2024-07-24 22:30:18.740347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.615 [2024-07-24 22:30:18.740353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.615 [2024-07-24 22:30:18.740370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.615 qpair failed and we were unable to recover it. 00:31:23.875 [2024-07-24 22:30:18.750145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.875 [2024-07-24 22:30:18.750330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.875 [2024-07-24 22:30:18.750349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.875 [2024-07-24 22:30:18.750356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.875 [2024-07-24 22:30:18.750363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.875 [2024-07-24 22:30:18.750379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.875 qpair failed and we were unable to recover it. 00:31:23.875 [2024-07-24 22:30:18.760249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.875 [2024-07-24 22:30:18.760387] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.875 [2024-07-24 22:30:18.760410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.875 [2024-07-24 22:30:18.760417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.875 [2024-07-24 22:30:18.760423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.875 [2024-07-24 22:30:18.760440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.875 qpair failed and we were unable to recover it. 
00:31:23.875 [2024-07-24 22:30:18.770257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.875 [2024-07-24 22:30:18.770440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.875 [2024-07-24 22:30:18.770459] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.875 [2024-07-24 22:30:18.770466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.875 [2024-07-24 22:30:18.770473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.875 [2024-07-24 22:30:18.770491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.875 qpair failed and we were unable to recover it. 00:31:23.875 [2024-07-24 22:30:18.780321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.875 [2024-07-24 22:30:18.780461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.875 [2024-07-24 22:30:18.780480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.875 [2024-07-24 22:30:18.780487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.875 [2024-07-24 22:30:18.780495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.875 [2024-07-24 22:30:18.780512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.875 qpair failed and we were unable to recover it. 00:31:23.875 [2024-07-24 22:30:18.790322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.875 [2024-07-24 22:30:18.790461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.875 [2024-07-24 22:30:18.790480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.875 [2024-07-24 22:30:18.790487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.875 [2024-07-24 22:30:18.790494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.875 [2024-07-24 22:30:18.790511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.875 qpair failed and we were unable to recover it. 
00:31:23.875 [2024-07-24 22:30:18.800371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.875 [2024-07-24 22:30:18.800517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.875 [2024-07-24 22:30:18.800536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.875 [2024-07-24 22:30:18.800543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.875 [2024-07-24 22:30:18.800552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.875 [2024-07-24 22:30:18.800570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.875 qpair failed and we were unable to recover it. 00:31:23.875 [2024-07-24 22:30:18.810434] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.875 [2024-07-24 22:30:18.810581] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.875 [2024-07-24 22:30:18.810600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.875 [2024-07-24 22:30:18.810607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.876 [2024-07-24 22:30:18.810614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.876 [2024-07-24 22:30:18.810631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.876 qpair failed and we were unable to recover it. 00:31:23.876 [2024-07-24 22:30:18.820374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.876 [2024-07-24 22:30:18.820514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.876 [2024-07-24 22:30:18.820534] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.876 [2024-07-24 22:30:18.820541] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.876 [2024-07-24 22:30:18.820547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.876 [2024-07-24 22:30:18.820563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.876 qpair failed and we were unable to recover it. 
00:31:23.876 [2024-07-24 22:30:18.830432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.876 [2024-07-24 22:30:18.830598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.876 [2024-07-24 22:30:18.830618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.876 [2024-07-24 22:30:18.830625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.876 [2024-07-24 22:30:18.830631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.876 [2024-07-24 22:30:18.830648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.876 qpair failed and we were unable to recover it. 00:31:23.876 [2024-07-24 22:30:18.840406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.876 [2024-07-24 22:30:18.840545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.876 [2024-07-24 22:30:18.840565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.876 [2024-07-24 22:30:18.840573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.876 [2024-07-24 22:30:18.840581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.876 [2024-07-24 22:30:18.840598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.876 qpair failed and we were unable to recover it. 00:31:23.876 [2024-07-24 22:30:18.850527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.876 [2024-07-24 22:30:18.850719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.876 [2024-07-24 22:30:18.850738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.876 [2024-07-24 22:30:18.850745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.876 [2024-07-24 22:30:18.850751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.876 [2024-07-24 22:30:18.850768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.876 qpair failed and we were unable to recover it. 
00:31:23.876 [2024-07-24 22:30:18.860533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.876 [2024-07-24 22:30:18.860670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.876 [2024-07-24 22:30:18.860689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.876 [2024-07-24 22:30:18.860696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.876 [2024-07-24 22:30:18.860703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.876 [2024-07-24 22:30:18.860720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.876 qpair failed and we were unable to recover it. 00:31:23.876 [2024-07-24 22:30:18.870554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.876 [2024-07-24 22:30:18.870694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.876 [2024-07-24 22:30:18.870714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.876 [2024-07-24 22:30:18.870720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.876 [2024-07-24 22:30:18.870727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.876 [2024-07-24 22:30:18.870744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.876 qpair failed and we were unable to recover it. 00:31:23.876 [2024-07-24 22:30:18.880601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.876 [2024-07-24 22:30:18.880750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.876 [2024-07-24 22:30:18.880770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.876 [2024-07-24 22:30:18.880778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.876 [2024-07-24 22:30:18.880785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.876 [2024-07-24 22:30:18.880802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.876 qpair failed and we were unable to recover it. 
00:31:23.876 [2024-07-24 22:30:18.890620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.876 [2024-07-24 22:30:18.890756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.876 [2024-07-24 22:30:18.890776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.876 [2024-07-24 22:30:18.890783] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.876 [2024-07-24 22:30:18.890794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.876 [2024-07-24 22:30:18.890811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.876 qpair failed and we were unable to recover it. 00:31:23.876 [2024-07-24 22:30:18.900649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.876 [2024-07-24 22:30:18.900788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.876 [2024-07-24 22:30:18.900807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.876 [2024-07-24 22:30:18.900815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.876 [2024-07-24 22:30:18.900821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.876 [2024-07-24 22:30:18.900838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.876 qpair failed and we were unable to recover it. 00:31:23.876 [2024-07-24 22:30:18.910679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.876 [2024-07-24 22:30:18.910826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.876 [2024-07-24 22:30:18.910845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.876 [2024-07-24 22:30:18.910853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.876 [2024-07-24 22:30:18.910860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.876 [2024-07-24 22:30:18.910877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.876 qpair failed and we were unable to recover it. 
00:31:23.876 [2024-07-24 22:30:18.920700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.876 [2024-07-24 22:30:18.920846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.876 [2024-07-24 22:30:18.920865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.876 [2024-07-24 22:30:18.920872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.876 [2024-07-24 22:30:18.920879] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.876 [2024-07-24 22:30:18.920896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.876 qpair failed and we were unable to recover it. 00:31:23.876 [2024-07-24 22:30:18.930786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.876 [2024-07-24 22:30:18.930921] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.876 [2024-07-24 22:30:18.930940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.876 [2024-07-24 22:30:18.930947] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.876 [2024-07-24 22:30:18.930954] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.877 [2024-07-24 22:30:18.930971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.877 qpair failed and we were unable to recover it. 00:31:23.877 [2024-07-24 22:30:18.940772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.877 [2024-07-24 22:30:18.940914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.877 [2024-07-24 22:30:18.940933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.877 [2024-07-24 22:30:18.940941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.877 [2024-07-24 22:30:18.940947] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.877 [2024-07-24 22:30:18.940964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.877 qpair failed and we were unable to recover it. 
00:31:23.877 [2024-07-24 22:30:18.950805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.877 [2024-07-24 22:30:18.950991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.877 [2024-07-24 22:30:18.951010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.877 [2024-07-24 22:30:18.951017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.877 [2024-07-24 22:30:18.951024] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.877 [2024-07-24 22:30:18.951040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.877 qpair failed and we were unable to recover it. 00:31:23.877 [2024-07-24 22:30:18.960813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.877 [2024-07-24 22:30:18.960953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.877 [2024-07-24 22:30:18.960972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.877 [2024-07-24 22:30:18.960979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.877 [2024-07-24 22:30:18.960986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.877 [2024-07-24 22:30:18.961002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.877 qpair failed and we were unable to recover it. 00:31:23.877 [2024-07-24 22:30:18.970793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.877 [2024-07-24 22:30:18.970973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.877 [2024-07-24 22:30:18.970993] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.877 [2024-07-24 22:30:18.971000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.877 [2024-07-24 22:30:18.971007] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.877 [2024-07-24 22:30:18.971024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.877 qpair failed and we were unable to recover it. 
00:31:23.877 [2024-07-24 22:30:18.980854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.877 [2024-07-24 22:30:18.980988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.877 [2024-07-24 22:30:18.981008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.877 [2024-07-24 22:30:18.981015] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.877 [2024-07-24 22:30:18.981025] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.877 [2024-07-24 22:30:18.981048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.877 qpair failed and we were unable to recover it. 00:31:23.877 [2024-07-24 22:30:18.990932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.877 [2024-07-24 22:30:18.991079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.877 [2024-07-24 22:30:18.991098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.877 [2024-07-24 22:30:18.991105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.877 [2024-07-24 22:30:18.991112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.877 [2024-07-24 22:30:18.991129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.877 qpair failed and we were unable to recover it. 00:31:23.877 [2024-07-24 22:30:19.000951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:23.877 [2024-07-24 22:30:19.001093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:23.877 [2024-07-24 22:30:19.001113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:23.877 [2024-07-24 22:30:19.001120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:23.877 [2024-07-24 22:30:19.001127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:23.877 [2024-07-24 22:30:19.001145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:23.877 qpair failed and we were unable to recover it. 
00:31:24.137 [2024-07-24 22:30:19.011006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.138 [2024-07-24 22:30:19.011159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.138 [2024-07-24 22:30:19.011179] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.138 [2024-07-24 22:30:19.011186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.138 [2024-07-24 22:30:19.011193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.138 [2024-07-24 22:30:19.011210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.138 qpair failed and we were unable to recover it. 00:31:24.138 [2024-07-24 22:30:19.021058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.138 [2024-07-24 22:30:19.021206] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.138 [2024-07-24 22:30:19.021226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.138 [2024-07-24 22:30:19.021233] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.138 [2024-07-24 22:30:19.021240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.138 [2024-07-24 22:30:19.021257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.138 qpair failed and we were unable to recover it. 00:31:24.138 [2024-07-24 22:30:19.031069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.138 [2024-07-24 22:30:19.031219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.138 [2024-07-24 22:30:19.031239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.138 [2024-07-24 22:30:19.031246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.138 [2024-07-24 22:30:19.031252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.138 [2024-07-24 22:30:19.031269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.138 qpair failed and we were unable to recover it. 
00:31:24.138 [2024-07-24 22:30:19.041088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.138 [2024-07-24 22:30:19.041227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.138 [2024-07-24 22:30:19.041247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.138 [2024-07-24 22:30:19.041255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.138 [2024-07-24 22:30:19.041262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.138 [2024-07-24 22:30:19.041279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.138 qpair failed and we were unable to recover it. 00:31:24.138 [2024-07-24 22:30:19.051169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.138 [2024-07-24 22:30:19.051308] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.138 [2024-07-24 22:30:19.051327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.138 [2024-07-24 22:30:19.051333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.138 [2024-07-24 22:30:19.051340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.138 [2024-07-24 22:30:19.051358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.138 qpair failed and we were unable to recover it. 00:31:24.138 [2024-07-24 22:30:19.061236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.138 [2024-07-24 22:30:19.061407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.138 [2024-07-24 22:30:19.061426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.138 [2024-07-24 22:30:19.061433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.138 [2024-07-24 22:30:19.061440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.138 [2024-07-24 22:30:19.061457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.138 qpair failed and we were unable to recover it. 
00:31:24.138 [2024-07-24 22:30:19.071192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.138 [2024-07-24 22:30:19.071334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.138 [2024-07-24 22:30:19.071353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.138 [2024-07-24 22:30:19.071360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.138 [2024-07-24 22:30:19.071370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.138 [2024-07-24 22:30:19.071387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.138 qpair failed and we were unable to recover it. 00:31:24.138 [2024-07-24 22:30:19.081230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.138 [2024-07-24 22:30:19.081366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.138 [2024-07-24 22:30:19.081387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.138 [2024-07-24 22:30:19.081394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.138 [2024-07-24 22:30:19.081401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.138 [2024-07-24 22:30:19.081418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.138 qpair failed and we were unable to recover it. 00:31:24.138 [2024-07-24 22:30:19.091249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.138 [2024-07-24 22:30:19.091385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.138 [2024-07-24 22:30:19.091404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.138 [2024-07-24 22:30:19.091412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.138 [2024-07-24 22:30:19.091419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.138 [2024-07-24 22:30:19.091436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.138 qpair failed and we were unable to recover it. 
00:31:24.138 [2024-07-24 22:30:19.101289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.138 [2024-07-24 22:30:19.101434] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.138 [2024-07-24 22:30:19.101452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.138 [2024-07-24 22:30:19.101459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.138 [2024-07-24 22:30:19.101465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.138 [2024-07-24 22:30:19.101482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.138 qpair failed and we were unable to recover it. 00:31:24.138 [2024-07-24 22:30:19.111318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.138 [2024-07-24 22:30:19.111456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.138 [2024-07-24 22:30:19.111476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.138 [2024-07-24 22:30:19.111483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.111490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.111507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 00:31:24.139 [2024-07-24 22:30:19.121331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.139 [2024-07-24 22:30:19.121467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.139 [2024-07-24 22:30:19.121487] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.139 [2024-07-24 22:30:19.121494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.121501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.121518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 
00:31:24.139 [2024-07-24 22:30:19.131361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.139 [2024-07-24 22:30:19.131500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.139 [2024-07-24 22:30:19.131519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.139 [2024-07-24 22:30:19.131526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.131533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.131550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 00:31:24.139 [2024-07-24 22:30:19.141402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.139 [2024-07-24 22:30:19.141547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.139 [2024-07-24 22:30:19.141567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.139 [2024-07-24 22:30:19.141573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.141580] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.141597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 00:31:24.139 [2024-07-24 22:30:19.151347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.139 [2024-07-24 22:30:19.151488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.139 [2024-07-24 22:30:19.151507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.139 [2024-07-24 22:30:19.151514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.151521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.151539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 
00:31:24.139 [2024-07-24 22:30:19.161464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.139 [2024-07-24 22:30:19.161604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.139 [2024-07-24 22:30:19.161623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.139 [2024-07-24 22:30:19.161634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.161640] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.161658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 00:31:24.139 [2024-07-24 22:30:19.171485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.139 [2024-07-24 22:30:19.171625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.139 [2024-07-24 22:30:19.171644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.139 [2024-07-24 22:30:19.171652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.171659] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.171676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 00:31:24.139 [2024-07-24 22:30:19.181515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.139 [2024-07-24 22:30:19.181661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.139 [2024-07-24 22:30:19.181681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.139 [2024-07-24 22:30:19.181688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.181695] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.181712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 
00:31:24.139 [2024-07-24 22:30:19.191527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.139 [2024-07-24 22:30:19.191670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.139 [2024-07-24 22:30:19.191689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.139 [2024-07-24 22:30:19.191696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.191703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.191720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 00:31:24.139 [2024-07-24 22:30:19.201562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.139 [2024-07-24 22:30:19.201702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.139 [2024-07-24 22:30:19.201722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.139 [2024-07-24 22:30:19.201729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.201736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.201752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 00:31:24.139 [2024-07-24 22:30:19.211592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.139 [2024-07-24 22:30:19.211735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.139 [2024-07-24 22:30:19.211755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.139 [2024-07-24 22:30:19.211762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.211769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.211787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 
00:31:24.139 [2024-07-24 22:30:19.221713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.139 [2024-07-24 22:30:19.221864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.139 [2024-07-24 22:30:19.221884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.139 [2024-07-24 22:30:19.221892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.221898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.221915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 00:31:24.139 [2024-07-24 22:30:19.231612] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.139 [2024-07-24 22:30:19.231756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.139 [2024-07-24 22:30:19.231775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.139 [2024-07-24 22:30:19.231782] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.231789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.231806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 00:31:24.139 [2024-07-24 22:30:19.241721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.139 [2024-07-24 22:30:19.241865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.139 [2024-07-24 22:30:19.241884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.139 [2024-07-24 22:30:19.241891] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.139 [2024-07-24 22:30:19.241898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.139 [2024-07-24 22:30:19.241915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.139 qpair failed and we were unable to recover it. 
00:31:24.140 [2024-07-24 22:30:19.251723] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.140 [2024-07-24 22:30:19.251865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.140 [2024-07-24 22:30:19.251884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.140 [2024-07-24 22:30:19.251897] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.140 [2024-07-24 22:30:19.251905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.140 [2024-07-24 22:30:19.251922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.140 qpair failed and we were unable to recover it. 00:31:24.140 [2024-07-24 22:30:19.261736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.140 [2024-07-24 22:30:19.261871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.140 [2024-07-24 22:30:19.261891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.140 [2024-07-24 22:30:19.261898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.140 [2024-07-24 22:30:19.261905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.140 [2024-07-24 22:30:19.261922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.140 qpair failed and we were unable to recover it. 00:31:24.400 [2024-07-24 22:30:19.271771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.400 [2024-07-24 22:30:19.271914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.400 [2024-07-24 22:30:19.271933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.400 [2024-07-24 22:30:19.271941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.400 [2024-07-24 22:30:19.271947] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.400 [2024-07-24 22:30:19.271964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.400 qpair failed and we were unable to recover it. 
00:31:24.400 [2024-07-24 22:30:19.281796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.400 [2024-07-24 22:30:19.281938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.400 [2024-07-24 22:30:19.281958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.400 [2024-07-24 22:30:19.281965] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.400 [2024-07-24 22:30:19.281972] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.400 [2024-07-24 22:30:19.281990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.400 qpair failed and we were unable to recover it. 00:31:24.400 [2024-07-24 22:30:19.291837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.400 [2024-07-24 22:30:19.291972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.400 [2024-07-24 22:30:19.291992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.400 [2024-07-24 22:30:19.291999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.400 [2024-07-24 22:30:19.292005] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.400 [2024-07-24 22:30:19.292023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.400 qpair failed and we were unable to recover it. 00:31:24.400 [2024-07-24 22:30:19.301871] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.400 [2024-07-24 22:30:19.302014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.400 [2024-07-24 22:30:19.302034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.400 [2024-07-24 22:30:19.302046] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.400 [2024-07-24 22:30:19.302053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.400 [2024-07-24 22:30:19.302071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.400 qpair failed and we were unable to recover it. 
00:31:24.400 [2024-07-24 22:30:19.311870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.401 [2024-07-24 22:30:19.312013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.401 [2024-07-24 22:30:19.312032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.401 [2024-07-24 22:30:19.312039] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.401 [2024-07-24 22:30:19.312053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.401 [2024-07-24 22:30:19.312070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.401 qpair failed and we were unable to recover it. 00:31:24.401 [2024-07-24 22:30:19.321940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.401 [2024-07-24 22:30:19.322082] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.401 [2024-07-24 22:30:19.322102] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.401 [2024-07-24 22:30:19.322109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.401 [2024-07-24 22:30:19.322116] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.401 [2024-07-24 22:30:19.322133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.401 qpair failed and we were unable to recover it. 00:31:24.401 [2024-07-24 22:30:19.331936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.401 [2024-07-24 22:30:19.332081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.401 [2024-07-24 22:30:19.332100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.401 [2024-07-24 22:30:19.332107] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.401 [2024-07-24 22:30:19.332114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.401 [2024-07-24 22:30:19.332132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.401 qpair failed and we were unable to recover it. 
00:31:24.401 [2024-07-24 22:30:19.341989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.401 [2024-07-24 22:30:19.342136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.401 [2024-07-24 22:30:19.342154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.401 [2024-07-24 22:30:19.342165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.401 [2024-07-24 22:30:19.342172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.401 [2024-07-24 22:30:19.342189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.401 qpair failed and we were unable to recover it. 00:31:24.401 [2024-07-24 22:30:19.351987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.401 [2024-07-24 22:30:19.352148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.401 [2024-07-24 22:30:19.352168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.401 [2024-07-24 22:30:19.352175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.401 [2024-07-24 22:30:19.352182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.401 [2024-07-24 22:30:19.352199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.401 qpair failed and we were unable to recover it. 00:31:24.401 [2024-07-24 22:30:19.361960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.401 [2024-07-24 22:30:19.362105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.401 [2024-07-24 22:30:19.362125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.401 [2024-07-24 22:30:19.362132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.401 [2024-07-24 22:30:19.362139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.401 [2024-07-24 22:30:19.362156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.401 qpair failed and we were unable to recover it. 
00:31:24.401 [2024-07-24 22:30:19.372073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.401 [2024-07-24 22:30:19.372209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.401 [2024-07-24 22:30:19.372228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.401 [2024-07-24 22:30:19.372236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.401 [2024-07-24 22:30:19.372243] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.401 [2024-07-24 22:30:19.372260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.401 qpair failed and we were unable to recover it. 00:31:24.401 [2024-07-24 22:30:19.382115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.401 [2024-07-24 22:30:19.382460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.401 [2024-07-24 22:30:19.382479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.401 [2024-07-24 22:30:19.382486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.401 [2024-07-24 22:30:19.382492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.401 [2024-07-24 22:30:19.382509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.401 qpair failed and we were unable to recover it. 00:31:24.401 [2024-07-24 22:30:19.392117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.401 [2024-07-24 22:30:19.392261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.401 [2024-07-24 22:30:19.392282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.401 [2024-07-24 22:30:19.392289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.401 [2024-07-24 22:30:19.392296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.401 [2024-07-24 22:30:19.392313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.401 qpair failed and we were unable to recover it. 
00:31:24.401 [2024-07-24 22:30:19.402113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.401 [2024-07-24 22:30:19.402254] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.401 [2024-07-24 22:30:19.402273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.401 [2024-07-24 22:30:19.402281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.401 [2024-07-24 22:30:19.402288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.401 [2024-07-24 22:30:19.402305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.401 qpair failed and we were unable to recover it. 00:31:24.401 [2024-07-24 22:30:19.412225] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.401 [2024-07-24 22:30:19.412378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.401 [2024-07-24 22:30:19.412399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.401 [2024-07-24 22:30:19.412406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.401 [2024-07-24 22:30:19.412413] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.401 [2024-07-24 22:30:19.412430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.401 qpair failed and we were unable to recover it. 00:31:24.401 [2024-07-24 22:30:19.422227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.401 [2024-07-24 22:30:19.422372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.401 [2024-07-24 22:30:19.422392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.401 [2024-07-24 22:30:19.422399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.401 [2024-07-24 22:30:19.422406] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.401 [2024-07-24 22:30:19.422423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.401 qpair failed and we were unable to recover it. 
00:31:24.401 [2024-07-24 22:30:19.432266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.401 [2024-07-24 22:30:19.432405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.401 [2024-07-24 22:30:19.432425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.402 [2024-07-24 22:30:19.432435] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.402 [2024-07-24 22:30:19.432441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.402 [2024-07-24 22:30:19.432458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.402 qpair failed and we were unable to recover it. 00:31:24.402 [2024-07-24 22:30:19.442283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.402 [2024-07-24 22:30:19.442426] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.402 [2024-07-24 22:30:19.442445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.402 [2024-07-24 22:30:19.442453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.402 [2024-07-24 22:30:19.442459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.402 [2024-07-24 22:30:19.442476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.402 qpair failed and we were unable to recover it. 00:31:24.402 [2024-07-24 22:30:19.452312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.402 [2024-07-24 22:30:19.452447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.402 [2024-07-24 22:30:19.452467] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.402 [2024-07-24 22:30:19.452474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.402 [2024-07-24 22:30:19.452481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.402 [2024-07-24 22:30:19.452497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.402 qpair failed and we were unable to recover it. 
00:31:24.402 [2024-07-24 22:30:19.462344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.402 [2024-07-24 22:30:19.462488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.402 [2024-07-24 22:30:19.462507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.402 [2024-07-24 22:30:19.462514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.402 [2024-07-24 22:30:19.462521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.402 [2024-07-24 22:30:19.462538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.402 qpair failed and we were unable to recover it. 00:31:24.402 [2024-07-24 22:30:19.472369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.402 [2024-07-24 22:30:19.472510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.402 [2024-07-24 22:30:19.472530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.402 [2024-07-24 22:30:19.472537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.402 [2024-07-24 22:30:19.472543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.402 [2024-07-24 22:30:19.472560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.402 qpair failed and we were unable to recover it. 00:31:24.402 [2024-07-24 22:30:19.482404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.402 [2024-07-24 22:30:19.482545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.402 [2024-07-24 22:30:19.482566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.402 [2024-07-24 22:30:19.482572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.402 [2024-07-24 22:30:19.482579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.402 [2024-07-24 22:30:19.482597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.402 qpair failed and we were unable to recover it. 
00:31:24.402 [2024-07-24 22:30:19.492428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.402 [2024-07-24 22:30:19.492571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.402 [2024-07-24 22:30:19.492591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.402 [2024-07-24 22:30:19.492598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.402 [2024-07-24 22:30:19.492605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.402 [2024-07-24 22:30:19.492622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.402 qpair failed and we were unable to recover it. 00:31:24.402 [2024-07-24 22:30:19.502460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.402 [2024-07-24 22:30:19.502600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.402 [2024-07-24 22:30:19.502621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.402 [2024-07-24 22:30:19.502629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.402 [2024-07-24 22:30:19.502636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.402 [2024-07-24 22:30:19.502653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.402 qpair failed and we were unable to recover it. 00:31:24.402 [2024-07-24 22:30:19.512485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.402 [2024-07-24 22:30:19.512620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.402 [2024-07-24 22:30:19.512640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.402 [2024-07-24 22:30:19.512648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.402 [2024-07-24 22:30:19.512655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.402 [2024-07-24 22:30:19.512672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.402 qpair failed and we were unable to recover it. 
00:31:24.402 [2024-07-24 22:30:19.522540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.402 [2024-07-24 22:30:19.522677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.402 [2024-07-24 22:30:19.522700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.402 [2024-07-24 22:30:19.522707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.402 [2024-07-24 22:30:19.522714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.402 [2024-07-24 22:30:19.522731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.402 qpair failed and we were unable to recover it. 00:31:24.402 [2024-07-24 22:30:19.532565] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.532710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.663 [2024-07-24 22:30:19.532729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.663 [2024-07-24 22:30:19.532737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.663 [2024-07-24 22:30:19.532743] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.663 [2024-07-24 22:30:19.532760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.663 qpair failed and we were unable to recover it. 00:31:24.663 [2024-07-24 22:30:19.542592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.542926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.663 [2024-07-24 22:30:19.542946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.663 [2024-07-24 22:30:19.542953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.663 [2024-07-24 22:30:19.542960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.663 [2024-07-24 22:30:19.542977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.663 qpair failed and we were unable to recover it. 
00:31:24.663 [2024-07-24 22:30:19.552562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.552731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.663 [2024-07-24 22:30:19.552751] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.663 [2024-07-24 22:30:19.552758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.663 [2024-07-24 22:30:19.552765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.663 [2024-07-24 22:30:19.552782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.663 qpair failed and we were unable to recover it. 00:31:24.663 [2024-07-24 22:30:19.562632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.562774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.663 [2024-07-24 22:30:19.562794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.663 [2024-07-24 22:30:19.562801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.663 [2024-07-24 22:30:19.562808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.663 [2024-07-24 22:30:19.562824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.663 qpair failed and we were unable to recover it. 00:31:24.663 [2024-07-24 22:30:19.572660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.572798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.663 [2024-07-24 22:30:19.572818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.663 [2024-07-24 22:30:19.572825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.663 [2024-07-24 22:30:19.572831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.663 [2024-07-24 22:30:19.572849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.663 qpair failed and we were unable to recover it. 
00:31:24.663 [2024-07-24 22:30:19.582693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.582835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.663 [2024-07-24 22:30:19.582854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.663 [2024-07-24 22:30:19.582862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.663 [2024-07-24 22:30:19.582869] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.663 [2024-07-24 22:30:19.582886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.663 qpair failed and we were unable to recover it. 00:31:24.663 [2024-07-24 22:30:19.592691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.592833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.663 [2024-07-24 22:30:19.592853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.663 [2024-07-24 22:30:19.592860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.663 [2024-07-24 22:30:19.592867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.663 [2024-07-24 22:30:19.592884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.663 qpair failed and we were unable to recover it. 00:31:24.663 [2024-07-24 22:30:19.602749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.602894] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.663 [2024-07-24 22:30:19.602914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.663 [2024-07-24 22:30:19.602921] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.663 [2024-07-24 22:30:19.602928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.663 [2024-07-24 22:30:19.602944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.663 qpair failed and we were unable to recover it. 
00:31:24.663 [2024-07-24 22:30:19.612775] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.612911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.663 [2024-07-24 22:30:19.612934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.663 [2024-07-24 22:30:19.612942] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.663 [2024-07-24 22:30:19.612949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.663 [2024-07-24 22:30:19.612966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.663 qpair failed and we were unable to recover it. 00:31:24.663 [2024-07-24 22:30:19.622816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.622965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.663 [2024-07-24 22:30:19.622985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.663 [2024-07-24 22:30:19.622992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.663 [2024-07-24 22:30:19.622998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.663 [2024-07-24 22:30:19.623015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.663 qpair failed and we were unable to recover it. 00:31:24.663 [2024-07-24 22:30:19.632831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.632983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.663 [2024-07-24 22:30:19.633003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.663 [2024-07-24 22:30:19.633009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.663 [2024-07-24 22:30:19.633016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.663 [2024-07-24 22:30:19.633034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.663 qpair failed and we were unable to recover it. 
00:31:24.663 [2024-07-24 22:30:19.642855] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.642996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.663 [2024-07-24 22:30:19.643015] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.663 [2024-07-24 22:30:19.643022] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.663 [2024-07-24 22:30:19.643029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.663 [2024-07-24 22:30:19.643053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.663 qpair failed and we were unable to recover it. 00:31:24.663 [2024-07-24 22:30:19.652873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.653013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.663 [2024-07-24 22:30:19.653034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.663 [2024-07-24 22:30:19.653041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.663 [2024-07-24 22:30:19.653053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.663 [2024-07-24 22:30:19.653074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.663 qpair failed and we were unable to recover it. 00:31:24.663 [2024-07-24 22:30:19.662932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.663 [2024-07-24 22:30:19.663077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.664 [2024-07-24 22:30:19.663097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.664 [2024-07-24 22:30:19.663104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.664 [2024-07-24 22:30:19.663111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.664 [2024-07-24 22:30:19.663129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.664 qpair failed and we were unable to recover it. 
00:31:24.664 [2024-07-24 22:30:19.672937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.664 [2024-07-24 22:30:19.673081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.664 [2024-07-24 22:30:19.673101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.664 [2024-07-24 22:30:19.673108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.664 [2024-07-24 22:30:19.673115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.664 [2024-07-24 22:30:19.673131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.664 qpair failed and we were unable to recover it. 00:31:24.664 [2024-07-24 22:30:19.682974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.664 [2024-07-24 22:30:19.683122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.664 [2024-07-24 22:30:19.683143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.664 [2024-07-24 22:30:19.683150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.664 [2024-07-24 22:30:19.683156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.664 [2024-07-24 22:30:19.683174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.664 qpair failed and we were unable to recover it. 00:31:24.664 [2024-07-24 22:30:19.692981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.664 [2024-07-24 22:30:19.693133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.664 [2024-07-24 22:30:19.693154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.664 [2024-07-24 22:30:19.693162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.664 [2024-07-24 22:30:19.693169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.664 [2024-07-24 22:30:19.693186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.664 qpair failed and we were unable to recover it. 
00:31:24.664 [2024-07-24 22:30:19.703034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.664 [2024-07-24 22:30:19.703194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.664 [2024-07-24 22:30:19.703217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.664 [2024-07-24 22:30:19.703225] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.664 [2024-07-24 22:30:19.703231] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.664 [2024-07-24 22:30:19.703248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.664 qpair failed and we were unable to recover it. 00:31:24.664 [2024-07-24 22:30:19.713067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.664 [2024-07-24 22:30:19.713215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.664 [2024-07-24 22:30:19.713235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.664 [2024-07-24 22:30:19.713242] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.664 [2024-07-24 22:30:19.713249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.664 [2024-07-24 22:30:19.713266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.664 qpair failed and we were unable to recover it. 00:31:24.664 [2024-07-24 22:30:19.723077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.664 [2024-07-24 22:30:19.723215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.664 [2024-07-24 22:30:19.723235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.664 [2024-07-24 22:30:19.723242] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.664 [2024-07-24 22:30:19.723248] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.664 [2024-07-24 22:30:19.723265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.664 qpair failed and we were unable to recover it. 
00:31:24.664 [2024-07-24 22:30:19.733092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.664 [2024-07-24 22:30:19.733233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.664 [2024-07-24 22:30:19.733253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.664 [2024-07-24 22:30:19.733260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.664 [2024-07-24 22:30:19.733266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.664 [2024-07-24 22:30:19.733284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.664 qpair failed and we were unable to recover it. 00:31:24.664 [2024-07-24 22:30:19.743159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.664 [2024-07-24 22:30:19.743301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.664 [2024-07-24 22:30:19.743321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.664 [2024-07-24 22:30:19.743328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.664 [2024-07-24 22:30:19.743335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.664 [2024-07-24 22:30:19.743357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.664 qpair failed and we were unable to recover it. 00:31:24.664 [2024-07-24 22:30:19.753153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.664 [2024-07-24 22:30:19.753305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.664 [2024-07-24 22:30:19.753326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.664 [2024-07-24 22:30:19.753333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.664 [2024-07-24 22:30:19.753340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.664 [2024-07-24 22:30:19.753356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.664 qpair failed and we were unable to recover it. 
00:31:24.664 [2024-07-24 22:30:19.763122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.664 [2024-07-24 22:30:19.763265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.664 [2024-07-24 22:30:19.763284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.664 [2024-07-24 22:30:19.763291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.664 [2024-07-24 22:30:19.763298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.664 [2024-07-24 22:30:19.763316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.664 qpair failed and we were unable to recover it. 00:31:24.664 [2024-07-24 22:30:19.773383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.664 [2024-07-24 22:30:19.773516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.664 [2024-07-24 22:30:19.773536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.664 [2024-07-24 22:30:19.773544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.664 [2024-07-24 22:30:19.773551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.664 [2024-07-24 22:30:19.773568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.664 qpair failed and we were unable to recover it. 00:31:24.664 [2024-07-24 22:30:19.783251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.664 [2024-07-24 22:30:19.783391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.665 [2024-07-24 22:30:19.783410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.665 [2024-07-24 22:30:19.783418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.665 [2024-07-24 22:30:19.783425] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.665 [2024-07-24 22:30:19.783441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.665 qpair failed and we were unable to recover it. 
00:31:24.665 [2024-07-24 22:30:19.793421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.665 [2024-07-24 22:30:19.793599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.665 [2024-07-24 22:30:19.793623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.665 [2024-07-24 22:30:19.793630] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.665 [2024-07-24 22:30:19.793637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.665 [2024-07-24 22:30:19.793654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.665 qpair failed and we were unable to recover it. 00:31:24.925 [2024-07-24 22:30:19.803287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.925 [2024-07-24 22:30:19.803436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.925 [2024-07-24 22:30:19.803456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.925 [2024-07-24 22:30:19.803464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.925 [2024-07-24 22:30:19.803471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.925 [2024-07-24 22:30:19.803487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.925 qpair failed and we were unable to recover it. 00:31:24.925 [2024-07-24 22:30:19.813279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.925 [2024-07-24 22:30:19.813422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.925 [2024-07-24 22:30:19.813442] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.925 [2024-07-24 22:30:19.813449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.925 [2024-07-24 22:30:19.813456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.925 [2024-07-24 22:30:19.813473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.925 qpair failed and we were unable to recover it. 
00:31:24.925 [2024-07-24 22:30:19.823441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.925 [2024-07-24 22:30:19.823589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.925 [2024-07-24 22:30:19.823609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.925 [2024-07-24 22:30:19.823616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.925 [2024-07-24 22:30:19.823623] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.925 [2024-07-24 22:30:19.823640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.925 qpair failed and we were unable to recover it. 00:31:24.925 [2024-07-24 22:30:19.833423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.925 [2024-07-24 22:30:19.833569] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.925 [2024-07-24 22:30:19.833590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.925 [2024-07-24 22:30:19.833597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.925 [2024-07-24 22:30:19.833603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.925 [2024-07-24 22:30:19.833624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.925 qpair failed and we were unable to recover it. 00:31:24.925 [2024-07-24 22:30:19.843442] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.925 [2024-07-24 22:30:19.843579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.925 [2024-07-24 22:30:19.843599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.925 [2024-07-24 22:30:19.843607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.925 [2024-07-24 22:30:19.843614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.925 [2024-07-24 22:30:19.843631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.925 qpair failed and we were unable to recover it. 
00:31:24.925 [2024-07-24 22:30:19.853464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.925 [2024-07-24 22:30:19.853605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.925 [2024-07-24 22:30:19.853625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.925 [2024-07-24 22:30:19.853633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.925 [2024-07-24 22:30:19.853640] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.925 [2024-07-24 22:30:19.853657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.925 qpair failed and we were unable to recover it. 00:31:24.925 [2024-07-24 22:30:19.863447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.925 [2024-07-24 22:30:19.863589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.925 [2024-07-24 22:30:19.863609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.925 [2024-07-24 22:30:19.863616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.925 [2024-07-24 22:30:19.863622] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.925 [2024-07-24 22:30:19.863639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.925 qpair failed and we were unable to recover it. 00:31:24.925 [2024-07-24 22:30:19.873453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.925 [2024-07-24 22:30:19.873598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.925 [2024-07-24 22:30:19.873617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.925 [2024-07-24 22:30:19.873625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.925 [2024-07-24 22:30:19.873631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.925 [2024-07-24 22:30:19.873649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.925 qpair failed and we were unable to recover it. 
00:31:24.925 [2024-07-24 22:30:19.883485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.925 [2024-07-24 22:30:19.883618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.925 [2024-07-24 22:30:19.883641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:19.883647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:19.883654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.926 [2024-07-24 22:30:19.883671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.926 qpair failed and we were unable to recover it. 00:31:24.926 [2024-07-24 22:30:19.893571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.926 [2024-07-24 22:30:19.893709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.926 [2024-07-24 22:30:19.893737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:19.893744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:19.893751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.926 [2024-07-24 22:30:19.893768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.926 qpair failed and we were unable to recover it. 00:31:24.926 [2024-07-24 22:30:19.903624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.926 [2024-07-24 22:30:19.903786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.926 [2024-07-24 22:30:19.903806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:19.903813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:19.903820] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.926 [2024-07-24 22:30:19.903837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.926 qpair failed and we were unable to recover it. 
00:31:24.926 [2024-07-24 22:30:19.913550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.926 [2024-07-24 22:30:19.913693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.926 [2024-07-24 22:30:19.913712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:19.913719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:19.913726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.926 [2024-07-24 22:30:19.913743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.926 qpair failed and we were unable to recover it. 00:31:24.926 [2024-07-24 22:30:19.923657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.926 [2024-07-24 22:30:19.923797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.926 [2024-07-24 22:30:19.923817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:19.923824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:19.923831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.926 [2024-07-24 22:30:19.923853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.926 qpair failed and we were unable to recover it. 00:31:24.926 [2024-07-24 22:30:19.933646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.926 [2024-07-24 22:30:19.933788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.926 [2024-07-24 22:30:19.933808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:19.933815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:19.933822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.926 [2024-07-24 22:30:19.933839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.926 qpair failed and we were unable to recover it. 
00:31:24.926 [2024-07-24 22:30:19.943688] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.926 [2024-07-24 22:30:19.943833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.926 [2024-07-24 22:30:19.943853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:19.943860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:19.943867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.926 [2024-07-24 22:30:19.943883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.926 qpair failed and we were unable to recover it. 00:31:24.926 [2024-07-24 22:30:19.953759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.926 [2024-07-24 22:30:19.953899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.926 [2024-07-24 22:30:19.953919] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:19.953925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:19.953932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.926 [2024-07-24 22:30:19.953948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.926 qpair failed and we were unable to recover it. 00:31:24.926 [2024-07-24 22:30:19.963703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.926 [2024-07-24 22:30:19.963840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.926 [2024-07-24 22:30:19.963860] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:19.963867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:19.963873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.926 [2024-07-24 22:30:19.963891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.926 qpair failed and we were unable to recover it. 
00:31:24.926 [2024-07-24 22:30:19.973840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.926 [2024-07-24 22:30:19.973973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.926 [2024-07-24 22:30:19.973996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:19.974003] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:19.974010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.926 [2024-07-24 22:30:19.974026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.926 qpair failed and we were unable to recover it. 00:31:24.926 [2024-07-24 22:30:19.983825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.926 [2024-07-24 22:30:19.983962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.926 [2024-07-24 22:30:19.983983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:19.983990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:19.983996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.926 [2024-07-24 22:30:19.984014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.926 qpair failed and we were unable to recover it. 00:31:24.926 [2024-07-24 22:30:19.993856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.926 [2024-07-24 22:30:19.993995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.926 [2024-07-24 22:30:19.994016] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:19.994024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:19.994030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.926 [2024-07-24 22:30:19.994052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.926 qpair failed and we were unable to recover it. 
00:31:24.926 [2024-07-24 22:30:20.003874] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.926 [2024-07-24 22:30:20.004027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.926 [2024-07-24 22:30:20.004053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:20.004061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:20.004068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.926 [2024-07-24 22:30:20.004085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.926 qpair failed and we were unable to recover it. 00:31:24.926 [2024-07-24 22:30:20.013901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.926 [2024-07-24 22:30:20.014087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.926 [2024-07-24 22:30:20.014107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.926 [2024-07-24 22:30:20.014114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.926 [2024-07-24 22:30:20.014125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.927 [2024-07-24 22:30:20.014142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.927 qpair failed and we were unable to recover it. 00:31:24.927 [2024-07-24 22:30:20.024123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.927 [2024-07-24 22:30:20.024348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.927 [2024-07-24 22:30:20.024369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.927 [2024-07-24 22:30:20.024376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.927 [2024-07-24 22:30:20.024382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.927 [2024-07-24 22:30:20.024398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.927 qpair failed and we were unable to recover it. 
00:31:24.927 [2024-07-24 22:30:20.034027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.927 [2024-07-24 22:30:20.034177] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.927 [2024-07-24 22:30:20.034199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.927 [2024-07-24 22:30:20.034207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.927 [2024-07-24 22:30:20.034214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.927 [2024-07-24 22:30:20.034232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.927 qpair failed and we were unable to recover it. 00:31:24.927 [2024-07-24 22:30:20.044218] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.927 [2024-07-24 22:30:20.044359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.927 [2024-07-24 22:30:20.044379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.927 [2024-07-24 22:30:20.044386] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.927 [2024-07-24 22:30:20.044393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.927 [2024-07-24 22:30:20.044411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.927 qpair failed and we were unable to recover it. 00:31:24.927 [2024-07-24 22:30:20.053986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:24.927 [2024-07-24 22:30:20.054138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:24.927 [2024-07-24 22:30:20.054158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:24.927 [2024-07-24 22:30:20.054165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:24.927 [2024-07-24 22:30:20.054172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:24.927 [2024-07-24 22:30:20.054189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:24.927 qpair failed and we were unable to recover it. 
00:31:25.187 [2024-07-24 22:30:20.064079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.187 [2024-07-24 22:30:20.064232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.187 [2024-07-24 22:30:20.064252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.187 [2024-07-24 22:30:20.064260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.187 [2024-07-24 22:30:20.064267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.187 [2024-07-24 22:30:20.064285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.187 qpair failed and we were unable to recover it. 00:31:25.187 [2024-07-24 22:30:20.074100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.187 [2024-07-24 22:30:20.074244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.187 [2024-07-24 22:30:20.074262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.187 [2024-07-24 22:30:20.074270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.187 [2024-07-24 22:30:20.074276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.187 [2024-07-24 22:30:20.074293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.187 qpair failed and we were unable to recover it. 00:31:25.187 [2024-07-24 22:30:20.084116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.187 [2024-07-24 22:30:20.084252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.187 [2024-07-24 22:30:20.084272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.187 [2024-07-24 22:30:20.084279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.187 [2024-07-24 22:30:20.084286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.187 [2024-07-24 22:30:20.084303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.187 qpair failed and we were unable to recover it. 
00:31:25.187 [2024-07-24 22:30:20.094114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.187 [2024-07-24 22:30:20.094251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.187 [2024-07-24 22:30:20.094271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.187 [2024-07-24 22:30:20.094279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.187 [2024-07-24 22:30:20.094286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.187 [2024-07-24 22:30:20.094303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.187 qpair failed and we were unable to recover it. 00:31:25.187 [2024-07-24 22:30:20.104156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.187 [2024-07-24 22:30:20.104301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.187 [2024-07-24 22:30:20.104320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.187 [2024-07-24 22:30:20.104327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.187 [2024-07-24 22:30:20.104337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.187 [2024-07-24 22:30:20.104354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.187 qpair failed and we were unable to recover it. 00:31:25.187 [2024-07-24 22:30:20.114175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.187 [2024-07-24 22:30:20.114314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.187 [2024-07-24 22:30:20.114333] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.187 [2024-07-24 22:30:20.114340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.187 [2024-07-24 22:30:20.114347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.187 [2024-07-24 22:30:20.114365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.187 qpair failed and we were unable to recover it. 
00:31:25.187 [2024-07-24 22:30:20.124208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.187 [2024-07-24 22:30:20.124386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.187 [2024-07-24 22:30:20.124406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.187 [2024-07-24 22:30:20.124413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.187 [2024-07-24 22:30:20.124420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.187 [2024-07-24 22:30:20.124436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.187 qpair failed and we were unable to recover it. 00:31:25.187 [2024-07-24 22:30:20.134190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.187 [2024-07-24 22:30:20.134337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.187 [2024-07-24 22:30:20.134357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.187 [2024-07-24 22:30:20.134364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.187 [2024-07-24 22:30:20.134371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.187 [2024-07-24 22:30:20.134388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.187 qpair failed and we were unable to recover it. 00:31:25.187 [2024-07-24 22:30:20.144276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.187 [2024-07-24 22:30:20.144419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.187 [2024-07-24 22:30:20.144439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.187 [2024-07-24 22:30:20.144446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.187 [2024-07-24 22:30:20.144453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.187 [2024-07-24 22:30:20.144469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.187 qpair failed and we were unable to recover it. 
00:31:25.187 [2024-07-24 22:30:20.154286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.187 [2024-07-24 22:30:20.154441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.187 [2024-07-24 22:30:20.154461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.187 [2024-07-24 22:30:20.154468] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.187 [2024-07-24 22:30:20.154474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.187 [2024-07-24 22:30:20.154491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.187 qpair failed and we were unable to recover it. 00:31:25.187 [2024-07-24 22:30:20.164264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.187 [2024-07-24 22:30:20.164406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.187 [2024-07-24 22:30:20.164426] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.187 [2024-07-24 22:30:20.164433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.187 [2024-07-24 22:30:20.164439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.187 [2024-07-24 22:30:20.164457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.187 qpair failed and we were unable to recover it. 00:31:25.187 [2024-07-24 22:30:20.174573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.187 [2024-07-24 22:30:20.174716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.187 [2024-07-24 22:30:20.174735] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.174741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.174748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.174765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 
00:31:25.188 [2024-07-24 22:30:20.184384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.184523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.184543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.184550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.184557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.184574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 00:31:25.188 [2024-07-24 22:30:20.194380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.194520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.194539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.194546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.194556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.194574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 00:31:25.188 [2024-07-24 22:30:20.204437] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.204575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.204594] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.204601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.204608] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.204625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 
00:31:25.188 [2024-07-24 22:30:20.214426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.214564] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.214585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.214592] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.214599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.214617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 00:31:25.188 [2024-07-24 22:30:20.224493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.224634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.224655] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.224662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.224669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.224686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 00:31:25.188 [2024-07-24 22:30:20.234519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.234656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.234675] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.234682] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.234689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.234706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 
00:31:25.188 [2024-07-24 22:30:20.244550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.244692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.244711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.244718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.244726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.244743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 00:31:25.188 [2024-07-24 22:30:20.254579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.254725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.254743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.254750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.254758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.254775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 00:31:25.188 [2024-07-24 22:30:20.264623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.264762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.264780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.264788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.264794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.264812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 
00:31:25.188 [2024-07-24 22:30:20.274571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.274707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.274725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.274732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.274739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.274756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 00:31:25.188 [2024-07-24 22:30:20.284659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.284811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.284831] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.284838] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.284848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.284865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 00:31:25.188 [2024-07-24 22:30:20.294684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.294863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.294883] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.294890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.294897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.294915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 
00:31:25.188 [2024-07-24 22:30:20.304740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.304889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.304909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.304917] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.304923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.304940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 00:31:25.188 [2024-07-24 22:30:20.314762] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.188 [2024-07-24 22:30:20.314912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.188 [2024-07-24 22:30:20.314931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.188 [2024-07-24 22:30:20.314938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.188 [2024-07-24 22:30:20.314946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.188 [2024-07-24 22:30:20.314963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.188 qpair failed and we were unable to recover it. 00:31:25.449 [2024-07-24 22:30:20.324787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.449 [2024-07-24 22:30:20.324966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.449 [2024-07-24 22:30:20.324985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.449 [2024-07-24 22:30:20.324992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.449 [2024-07-24 22:30:20.324999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.449 [2024-07-24 22:30:20.325016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.449 qpair failed and we were unable to recover it. 
00:31:25.449 [2024-07-24 22:30:20.334867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.449 [2024-07-24 22:30:20.335040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.449 [2024-07-24 22:30:20.335065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.449 [2024-07-24 22:30:20.335072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.449 [2024-07-24 22:30:20.335079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.449 [2024-07-24 22:30:20.335096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.449 qpair failed and we were unable to recover it. 00:31:25.449 [2024-07-24 22:30:20.344913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.449 [2024-07-24 22:30:20.345057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.449 [2024-07-24 22:30:20.345076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.449 [2024-07-24 22:30:20.345083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.449 [2024-07-24 22:30:20.345090] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.449 [2024-07-24 22:30:20.345108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.449 qpair failed and we were unable to recover it. 00:31:25.449 [2024-07-24 22:30:20.354891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.449 [2024-07-24 22:30:20.355049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.449 [2024-07-24 22:30:20.355068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.449 [2024-07-24 22:30:20.355075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.449 [2024-07-24 22:30:20.355082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.449 [2024-07-24 22:30:20.355099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.449 qpair failed and we were unable to recover it. 
00:31:25.449 [2024-07-24 22:30:20.364913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.449 [2024-07-24 22:30:20.365072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.449 [2024-07-24 22:30:20.365091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.449 [2024-07-24 22:30:20.365098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.449 [2024-07-24 22:30:20.365104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.449 [2024-07-24 22:30:20.365121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.449 qpair failed and we were unable to recover it. 00:31:25.449 [2024-07-24 22:30:20.374944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.449 [2024-07-24 22:30:20.375123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.449 [2024-07-24 22:30:20.375143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.449 [2024-07-24 22:30:20.375153] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.449 [2024-07-24 22:30:20.375160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.449 [2024-07-24 22:30:20.375177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.449 qpair failed and we were unable to recover it. 00:31:25.449 [2024-07-24 22:30:20.384985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.449 [2024-07-24 22:30:20.385143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.449 [2024-07-24 22:30:20.385163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.449 [2024-07-24 22:30:20.385170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.449 [2024-07-24 22:30:20.385177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.449 [2024-07-24 22:30:20.385195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.449 qpair failed and we were unable to recover it. 
00:31:25.449 [2024-07-24 22:30:20.395020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.449 [2024-07-24 22:30:20.395230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.449 [2024-07-24 22:30:20.395249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.449 [2024-07-24 22:30:20.395256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.449 [2024-07-24 22:30:20.395262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.449 [2024-07-24 22:30:20.395279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.449 qpair failed and we were unable to recover it. 00:31:25.449 [2024-07-24 22:30:20.405031] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.449 [2024-07-24 22:30:20.405166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.449 [2024-07-24 22:30:20.405185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.449 [2024-07-24 22:30:20.405192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.449 [2024-07-24 22:30:20.405199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.449 [2024-07-24 22:30:20.405216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.449 qpair failed and we were unable to recover it. 00:31:25.449 [2024-07-24 22:30:20.415065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.449 [2024-07-24 22:30:20.415203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.449 [2024-07-24 22:30:20.415222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.449 [2024-07-24 22:30:20.415229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.449 [2024-07-24 22:30:20.415236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.449 [2024-07-24 22:30:20.415253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.449 qpair failed and we were unable to recover it. 
00:31:25.449 [2024-07-24 22:30:20.425052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.449 [2024-07-24 22:30:20.425194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.449 [2024-07-24 22:30:20.425214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.449 [2024-07-24 22:30:20.425221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.449 [2024-07-24 22:30:20.425228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.449 [2024-07-24 22:30:20.425246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.449 qpair failed and we were unable to recover it. 00:31:25.449 [2024-07-24 22:30:20.435112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.449 [2024-07-24 22:30:20.435262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.449 [2024-07-24 22:30:20.435282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.449 [2024-07-24 22:30:20.435289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.449 [2024-07-24 22:30:20.435296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.449 [2024-07-24 22:30:20.435313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.449 qpair failed and we were unable to recover it. 00:31:25.449 [2024-07-24 22:30:20.445151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.449 [2024-07-24 22:30:20.445299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.449 [2024-07-24 22:30:20.445318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.449 [2024-07-24 22:30:20.445325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.449 [2024-07-24 22:30:20.445332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.445349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 
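(Annotation, not part of the captured log.) The status pair reported on every attempt, sct 1 with sc 130 (0x82), is the command-specific status for a Fabrics CONNECT rejected for invalid parameters, which lines up with the target-side "Unknown controller ID 0x1" complaint. A minimal sketch of how a host completion callback could classify that status with SPDK's spec headers follows; the helper name is made up, and SPDK_NVMF_FABRIC_SC_INVALID_PARAM is assumed to be the 0x82 value defined in spdk/nvmf_spec.h.

/* Sketch only: classify the completion status seen in this log (sct 1, sc 130).
 * The spdk_nvme_cpl would come from an admin or I/O completion callback. */
#include <stdbool.h>
#include "spdk/nvme_spec.h"
#include "spdk/nvmf_spec.h"

static bool
is_connect_invalid_param(const struct spdk_nvme_cpl *cpl)
{
	/* sct 1: command-specific status; sc 0x82 (decimal 130): Fabrics
	 * CONNECT rejected for invalid parameters (e.g. an unknown cntlid). */
	return cpl->status.sct == SPDK_NVME_SCT_COMMAND_SPECIFIC &&
	       cpl->status.sc == SPDK_NVMF_FABRIC_SC_INVALID_PARAM;
}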
00:31:25.450 [2024-07-24 22:30:20.455183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.450 [2024-07-24 22:30:20.455331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.450 [2024-07-24 22:30:20.455350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.450 [2024-07-24 22:30:20.455357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.450 [2024-07-24 22:30:20.455364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.455383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 00:31:25.450 [2024-07-24 22:30:20.465226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.450 [2024-07-24 22:30:20.465554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.450 [2024-07-24 22:30:20.465572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.450 [2024-07-24 22:30:20.465583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.450 [2024-07-24 22:30:20.465590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.465606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 00:31:25.450 [2024-07-24 22:30:20.475248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.450 [2024-07-24 22:30:20.475388] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.450 [2024-07-24 22:30:20.475408] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.450 [2024-07-24 22:30:20.475415] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.450 [2024-07-24 22:30:20.475422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.475439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 
00:31:25.450 [2024-07-24 22:30:20.485283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.450 [2024-07-24 22:30:20.485417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.450 [2024-07-24 22:30:20.485437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.450 [2024-07-24 22:30:20.485445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.450 [2024-07-24 22:30:20.485451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.485469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 00:31:25.450 [2024-07-24 22:30:20.495323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.450 [2024-07-24 22:30:20.495473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.450 [2024-07-24 22:30:20.495494] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.450 [2024-07-24 22:30:20.495502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.450 [2024-07-24 22:30:20.495508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.495525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 00:31:25.450 [2024-07-24 22:30:20.505336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.450 [2024-07-24 22:30:20.505475] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.450 [2024-07-24 22:30:20.505495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.450 [2024-07-24 22:30:20.505502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.450 [2024-07-24 22:30:20.505509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.505527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 
00:31:25.450 [2024-07-24 22:30:20.515322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.450 [2024-07-24 22:30:20.515459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.450 [2024-07-24 22:30:20.515479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.450 [2024-07-24 22:30:20.515486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.450 [2024-07-24 22:30:20.515493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.515510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 00:31:25.450 [2024-07-24 22:30:20.525400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.450 [2024-07-24 22:30:20.525741] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.450 [2024-07-24 22:30:20.525760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.450 [2024-07-24 22:30:20.525767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.450 [2024-07-24 22:30:20.525775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.525790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 00:31:25.450 [2024-07-24 22:30:20.535423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.450 [2024-07-24 22:30:20.535570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.450 [2024-07-24 22:30:20.535590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.450 [2024-07-24 22:30:20.535596] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.450 [2024-07-24 22:30:20.535603] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.535620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 
00:31:25.450 [2024-07-24 22:30:20.545453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.450 [2024-07-24 22:30:20.545600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.450 [2024-07-24 22:30:20.545620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.450 [2024-07-24 22:30:20.545628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.450 [2024-07-24 22:30:20.545634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.545651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 00:31:25.450 [2024-07-24 22:30:20.555478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.450 [2024-07-24 22:30:20.555627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.450 [2024-07-24 22:30:20.555646] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.450 [2024-07-24 22:30:20.555657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.450 [2024-07-24 22:30:20.555663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.555680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 00:31:25.450 [2024-07-24 22:30:20.565515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.450 [2024-07-24 22:30:20.565687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.450 [2024-07-24 22:30:20.565706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.450 [2024-07-24 22:30:20.565713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.450 [2024-07-24 22:30:20.565720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.565737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 
00:31:25.450 [2024-07-24 22:30:20.575528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.450 [2024-07-24 22:30:20.575674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.450 [2024-07-24 22:30:20.575694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.450 [2024-07-24 22:30:20.575702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.450 [2024-07-24 22:30:20.575708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.450 [2024-07-24 22:30:20.575725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.450 qpair failed and we were unable to recover it. 00:31:25.711 [2024-07-24 22:30:20.585567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.711 [2024-07-24 22:30:20.585716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.711 [2024-07-24 22:30:20.585736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.711 [2024-07-24 22:30:20.585743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.711 [2024-07-24 22:30:20.585750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.711 [2024-07-24 22:30:20.585767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.711 qpair failed and we were unable to recover it. 00:31:25.711 [2024-07-24 22:30:20.595600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.711 [2024-07-24 22:30:20.595757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.711 [2024-07-24 22:30:20.595777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.711 [2024-07-24 22:30:20.595784] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.711 [2024-07-24 22:30:20.595790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.711 [2024-07-24 22:30:20.595807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.711 qpair failed and we were unable to recover it. 
00:31:25.711 [2024-07-24 22:30:20.605628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.711 [2024-07-24 22:30:20.605768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.711 [2024-07-24 22:30:20.605788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.711 [2024-07-24 22:30:20.605795] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.711 [2024-07-24 22:30:20.605802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.711 [2024-07-24 22:30:20.605819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.711 qpair failed and we were unable to recover it. 00:31:25.711 [2024-07-24 22:30:20.615653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.711 [2024-07-24 22:30:20.615803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.711 [2024-07-24 22:30:20.615823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.711 [2024-07-24 22:30:20.615830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.711 [2024-07-24 22:30:20.615837] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.711 [2024-07-24 22:30:20.615855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.711 qpair failed and we were unable to recover it. 00:31:25.711 [2024-07-24 22:30:20.625709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.711 [2024-07-24 22:30:20.625848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.711 [2024-07-24 22:30:20.625868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.711 [2024-07-24 22:30:20.625875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.711 [2024-07-24 22:30:20.625882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.711 [2024-07-24 22:30:20.625899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.711 qpair failed and we were unable to recover it. 
00:31:25.711 [2024-07-24 22:30:20.635668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.711 [2024-07-24 22:30:20.635814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.711 [2024-07-24 22:30:20.635834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.711 [2024-07-24 22:30:20.635841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.711 [2024-07-24 22:30:20.635847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.711 [2024-07-24 22:30:20.635864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.711 qpair failed and we were unable to recover it. 00:31:25.711 [2024-07-24 22:30:20.645763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.711 [2024-07-24 22:30:20.645911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.711 [2024-07-24 22:30:20.645930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.711 [2024-07-24 22:30:20.645941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.711 [2024-07-24 22:30:20.645948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.711 [2024-07-24 22:30:20.645964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.711 qpair failed and we were unable to recover it. 00:31:25.711 [2024-07-24 22:30:20.655841] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.711 [2024-07-24 22:30:20.655981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.711 [2024-07-24 22:30:20.656001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.711 [2024-07-24 22:30:20.656008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.711 [2024-07-24 22:30:20.656016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:25.711 [2024-07-24 22:30:20.656032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:25.711 qpair failed and we were unable to recover it. 
00:31:25.711 [2024-07-24 22:30:20.665883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.711 [2024-07-24 22:30:20.666078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.711 [2024-07-24 22:30:20.666109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.711 [2024-07-24 22:30:20.666121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.711 [2024-07-24 22:30:20.666131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.711 [2024-07-24 22:30:20.666156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.711 qpair failed and we were unable to recover it. 00:31:25.711 [2024-07-24 22:30:20.675813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.711 [2024-07-24 22:30:20.675954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.711 [2024-07-24 22:30:20.675974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.711 [2024-07-24 22:30:20.675981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.711 [2024-07-24 22:30:20.675988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.676006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.712 qpair failed and we were unable to recover it. 00:31:25.712 [2024-07-24 22:30:20.685903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.712 [2024-07-24 22:30:20.686037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.712 [2024-07-24 22:30:20.686062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.712 [2024-07-24 22:30:20.686070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.712 [2024-07-24 22:30:20.686077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.686095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.712 qpair failed and we were unable to recover it. 
00:31:25.712 [2024-07-24 22:30:20.695933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.712 [2024-07-24 22:30:20.696081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.712 [2024-07-24 22:30:20.696101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.712 [2024-07-24 22:30:20.696109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.712 [2024-07-24 22:30:20.696115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.696133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.712 qpair failed and we were unable to recover it. 00:31:25.712 [2024-07-24 22:30:20.706006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.712 [2024-07-24 22:30:20.706160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.712 [2024-07-24 22:30:20.706180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.712 [2024-07-24 22:30:20.706187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.712 [2024-07-24 22:30:20.706194] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.706213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.712 qpair failed and we were unable to recover it. 00:31:25.712 [2024-07-24 22:30:20.715985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.712 [2024-07-24 22:30:20.716132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.712 [2024-07-24 22:30:20.716152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.712 [2024-07-24 22:30:20.716159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.712 [2024-07-24 22:30:20.716166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.716184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.712 qpair failed and we were unable to recover it. 
00:31:25.712 [2024-07-24 22:30:20.726011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.712 [2024-07-24 22:30:20.726152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.712 [2024-07-24 22:30:20.726171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.712 [2024-07-24 22:30:20.726179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.712 [2024-07-24 22:30:20.726186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.726203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.712 qpair failed and we were unable to recover it. 00:31:25.712 [2024-07-24 22:30:20.736080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.712 [2024-07-24 22:30:20.736230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.712 [2024-07-24 22:30:20.736252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.712 [2024-07-24 22:30:20.736260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.712 [2024-07-24 22:30:20.736266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.736283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.712 qpair failed and we were unable to recover it. 00:31:25.712 [2024-07-24 22:30:20.746098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.712 [2024-07-24 22:30:20.746239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.712 [2024-07-24 22:30:20.746258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.712 [2024-07-24 22:30:20.746265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.712 [2024-07-24 22:30:20.746272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.746290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.712 qpair failed and we were unable to recover it. 
00:31:25.712 [2024-07-24 22:30:20.756129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.712 [2024-07-24 22:30:20.756270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.712 [2024-07-24 22:30:20.756289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.712 [2024-07-24 22:30:20.756296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.712 [2024-07-24 22:30:20.756302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.756320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.712 qpair failed and we were unable to recover it. 00:31:25.712 [2024-07-24 22:30:20.766099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.712 [2024-07-24 22:30:20.766236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.712 [2024-07-24 22:30:20.766255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.712 [2024-07-24 22:30:20.766263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.712 [2024-07-24 22:30:20.766270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.766287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.712 qpair failed and we were unable to recover it. 00:31:25.712 [2024-07-24 22:30:20.776163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.712 [2024-07-24 22:30:20.776304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.712 [2024-07-24 22:30:20.776323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.712 [2024-07-24 22:30:20.776330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.712 [2024-07-24 22:30:20.776337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.776357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.712 qpair failed and we were unable to recover it. 
00:31:25.712 [2024-07-24 22:30:20.786194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.712 [2024-07-24 22:30:20.786332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.712 [2024-07-24 22:30:20.786351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.712 [2024-07-24 22:30:20.786358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.712 [2024-07-24 22:30:20.786365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.786382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.712 qpair failed and we were unable to recover it. 00:31:25.712 [2024-07-24 22:30:20.796209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.712 [2024-07-24 22:30:20.796342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.712 [2024-07-24 22:30:20.796362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.712 [2024-07-24 22:30:20.796369] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.712 [2024-07-24 22:30:20.796376] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.796393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.712 qpair failed and we were unable to recover it. 00:31:25.712 [2024-07-24 22:30:20.806247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.712 [2024-07-24 22:30:20.806388] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.712 [2024-07-24 22:30:20.806407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.712 [2024-07-24 22:30:20.806414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.712 [2024-07-24 22:30:20.806420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.712 [2024-07-24 22:30:20.806438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.713 qpair failed and we were unable to recover it. 
00:31:25.713 [2024-07-24 22:30:20.816199] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.713 [2024-07-24 22:30:20.816335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.713 [2024-07-24 22:30:20.816354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.713 [2024-07-24 22:30:20.816362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.713 [2024-07-24 22:30:20.816369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.713 [2024-07-24 22:30:20.816387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.713 qpair failed and we were unable to recover it. 00:31:25.713 [2024-07-24 22:30:20.826278] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.713 [2024-07-24 22:30:20.826416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.713 [2024-07-24 22:30:20.826439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.713 [2024-07-24 22:30:20.826446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.713 [2024-07-24 22:30:20.826452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.713 [2024-07-24 22:30:20.826469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.713 qpair failed and we were unable to recover it. 00:31:25.713 [2024-07-24 22:30:20.836509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.713 [2024-07-24 22:30:20.836653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.713 [2024-07-24 22:30:20.836672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.713 [2024-07-24 22:30:20.836679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.713 [2024-07-24 22:30:20.836686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.713 [2024-07-24 22:30:20.836703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.713 qpair failed and we were unable to recover it. 
00:31:25.973 [2024-07-24 22:30:20.846392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.973 [2024-07-24 22:30:20.846547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.973 [2024-07-24 22:30:20.846567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.973 [2024-07-24 22:30:20.846574] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.973 [2024-07-24 22:30:20.846581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.973 [2024-07-24 22:30:20.846598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.973 qpair failed and we were unable to recover it. 00:31:25.973 [2024-07-24 22:30:20.856420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.973 [2024-07-24 22:30:20.856606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.973 [2024-07-24 22:30:20.856625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.973 [2024-07-24 22:30:20.856633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.973 [2024-07-24 22:30:20.856639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.973 [2024-07-24 22:30:20.856656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.973 qpair failed and we were unable to recover it. 00:31:25.973 [2024-07-24 22:30:20.866425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.973 [2024-07-24 22:30:20.866566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.973 [2024-07-24 22:30:20.866586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.973 [2024-07-24 22:30:20.866593] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.973 [2024-07-24 22:30:20.866599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.973 [2024-07-24 22:30:20.866620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.973 qpair failed and we were unable to recover it. 
00:31:25.973 [2024-07-24 22:30:20.876435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.973 [2024-07-24 22:30:20.876579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.973 [2024-07-24 22:30:20.876599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.973 [2024-07-24 22:30:20.876607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.973 [2024-07-24 22:30:20.876613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.973 [2024-07-24 22:30:20.876631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.973 qpair failed and we were unable to recover it. 00:31:25.973 [2024-07-24 22:30:20.886471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.973 [2024-07-24 22:30:20.886613] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.973 [2024-07-24 22:30:20.886633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.973 [2024-07-24 22:30:20.886641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.973 [2024-07-24 22:30:20.886650] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:20.886669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 00:31:25.974 [2024-07-24 22:30:20.896517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.974 [2024-07-24 22:30:20.896658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.974 [2024-07-24 22:30:20.896676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.974 [2024-07-24 22:30:20.896683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.974 [2024-07-24 22:30:20.896690] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:20.896708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 
00:31:25.974 [2024-07-24 22:30:20.906552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.974 [2024-07-24 22:30:20.906706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.974 [2024-07-24 22:30:20.906725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.974 [2024-07-24 22:30:20.906732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.974 [2024-07-24 22:30:20.906739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:20.906757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 00:31:25.974 [2024-07-24 22:30:20.916569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.974 [2024-07-24 22:30:20.916717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.974 [2024-07-24 22:30:20.916736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.974 [2024-07-24 22:30:20.916743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.974 [2024-07-24 22:30:20.916750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:20.916768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 00:31:25.974 [2024-07-24 22:30:20.926625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.974 [2024-07-24 22:30:20.926758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.974 [2024-07-24 22:30:20.926777] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.974 [2024-07-24 22:30:20.926783] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.974 [2024-07-24 22:30:20.926791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:20.926808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 
00:31:25.974 [2024-07-24 22:30:20.936626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.974 [2024-07-24 22:30:20.936761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.974 [2024-07-24 22:30:20.936780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.974 [2024-07-24 22:30:20.936787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.974 [2024-07-24 22:30:20.936793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:20.936811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 00:31:25.974 [2024-07-24 22:30:20.946668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.974 [2024-07-24 22:30:20.946817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.974 [2024-07-24 22:30:20.946836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.974 [2024-07-24 22:30:20.946844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.974 [2024-07-24 22:30:20.946850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:20.946868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 00:31:25.974 [2024-07-24 22:30:20.956670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.974 [2024-07-24 22:30:20.956809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.974 [2024-07-24 22:30:20.956827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.974 [2024-07-24 22:30:20.956835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.974 [2024-07-24 22:30:20.956845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:20.956863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 
00:31:25.974 [2024-07-24 22:30:20.966706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.974 [2024-07-24 22:30:20.966855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.974 [2024-07-24 22:30:20.966874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.974 [2024-07-24 22:30:20.966881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.974 [2024-07-24 22:30:20.966888] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:20.966905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 00:31:25.974 [2024-07-24 22:30:20.976719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.974 [2024-07-24 22:30:20.976863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.974 [2024-07-24 22:30:20.976881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.974 [2024-07-24 22:30:20.976888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.974 [2024-07-24 22:30:20.976895] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:20.976912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 00:31:25.974 [2024-07-24 22:30:20.986772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.974 [2024-07-24 22:30:20.986923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.974 [2024-07-24 22:30:20.986942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.974 [2024-07-24 22:30:20.986949] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.974 [2024-07-24 22:30:20.986956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:20.986974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 
00:31:25.974 [2024-07-24 22:30:20.996799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.974 [2024-07-24 22:30:20.996940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.974 [2024-07-24 22:30:20.996959] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.974 [2024-07-24 22:30:20.996966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.974 [2024-07-24 22:30:20.996972] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:20.996989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 00:31:25.974 [2024-07-24 22:30:21.006748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.974 [2024-07-24 22:30:21.006902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.974 [2024-07-24 22:30:21.006922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.974 [2024-07-24 22:30:21.006929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.974 [2024-07-24 22:30:21.006936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:21.006953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 00:31:25.974 [2024-07-24 22:30:21.017085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.974 [2024-07-24 22:30:21.017227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.974 [2024-07-24 22:30:21.017245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.974 [2024-07-24 22:30:21.017252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.974 [2024-07-24 22:30:21.017259] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.974 [2024-07-24 22:30:21.017277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.974 qpair failed and we were unable to recover it. 
00:31:25.975 [2024-07-24 22:30:21.026898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.975 [2024-07-24 22:30:21.027040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.975 [2024-07-24 22:30:21.027063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.975 [2024-07-24 22:30:21.027070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.975 [2024-07-24 22:30:21.027077] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.975 [2024-07-24 22:30:21.027094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.975 qpair failed and we were unable to recover it. 00:31:25.975 [2024-07-24 22:30:21.036919] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.975 [2024-07-24 22:30:21.037062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.975 [2024-07-24 22:30:21.037081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.975 [2024-07-24 22:30:21.037088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.975 [2024-07-24 22:30:21.037095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.975 [2024-07-24 22:30:21.037113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.975 qpair failed and we were unable to recover it. 00:31:25.975 [2024-07-24 22:30:21.046986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.975 [2024-07-24 22:30:21.047142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.975 [2024-07-24 22:30:21.047161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.975 [2024-07-24 22:30:21.047172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.975 [2024-07-24 22:30:21.047178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.975 [2024-07-24 22:30:21.047195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.975 qpair failed and we were unable to recover it. 
00:31:25.975 [2024-07-24 22:30:21.056970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.975 [2024-07-24 22:30:21.057114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.975 [2024-07-24 22:30:21.057133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.975 [2024-07-24 22:30:21.057141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.975 [2024-07-24 22:30:21.057147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.975 [2024-07-24 22:30:21.057165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.975 qpair failed and we were unable to recover it. 00:31:25.975 [2024-07-24 22:30:21.066987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.975 [2024-07-24 22:30:21.067320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.975 [2024-07-24 22:30:21.067339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.975 [2024-07-24 22:30:21.067346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.975 [2024-07-24 22:30:21.067352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.975 [2024-07-24 22:30:21.067369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.975 qpair failed and we were unable to recover it. 00:31:25.975 [2024-07-24 22:30:21.077038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.975 [2024-07-24 22:30:21.077182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.975 [2024-07-24 22:30:21.077201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.975 [2024-07-24 22:30:21.077208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.975 [2024-07-24 22:30:21.077215] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.975 [2024-07-24 22:30:21.077232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.975 qpair failed and we were unable to recover it. 
00:31:25.975 [2024-07-24 22:30:21.087069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.975 [2024-07-24 22:30:21.087201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.975 [2024-07-24 22:30:21.087220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.975 [2024-07-24 22:30:21.087227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.975 [2024-07-24 22:30:21.087234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.975 [2024-07-24 22:30:21.087251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.975 qpair failed and we were unable to recover it. 00:31:25.975 [2024-07-24 22:30:21.097097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:25.975 [2024-07-24 22:30:21.097236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:25.975 [2024-07-24 22:30:21.097255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:25.975 [2024-07-24 22:30:21.097262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:25.975 [2024-07-24 22:30:21.097270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:25.975 [2024-07-24 22:30:21.097287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:25.975 qpair failed and we were unable to recover it. 00:31:26.236 [2024-07-24 22:30:21.107115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.236 [2024-07-24 22:30:21.107261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.236 [2024-07-24 22:30:21.107279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.236 [2024-07-24 22:30:21.107285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.236 [2024-07-24 22:30:21.107292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.236 [2024-07-24 22:30:21.107309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.236 qpair failed and we were unable to recover it. 
00:31:26.236 [2024-07-24 22:30:21.117124] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.236 [2024-07-24 22:30:21.117267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.236 [2024-07-24 22:30:21.117286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.236 [2024-07-24 22:30:21.117293] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.236 [2024-07-24 22:30:21.117300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.236 [2024-07-24 22:30:21.117318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.236 qpair failed and we were unable to recover it. 00:31:26.236 [2024-07-24 22:30:21.127183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.236 [2024-07-24 22:30:21.127323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.236 [2024-07-24 22:30:21.127342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.236 [2024-07-24 22:30:21.127350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.236 [2024-07-24 22:30:21.127357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.236 [2024-07-24 22:30:21.127374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.236 qpair failed and we were unable to recover it. 00:31:26.236 [2024-07-24 22:30:21.137208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.236 [2024-07-24 22:30:21.137358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.236 [2024-07-24 22:30:21.137377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.236 [2024-07-24 22:30:21.137388] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.236 [2024-07-24 22:30:21.137395] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.236 [2024-07-24 22:30:21.137412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.236 qpair failed and we were unable to recover it. 
00:31:26.236 [2024-07-24 22:30:21.147250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.236 [2024-07-24 22:30:21.147392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.236 [2024-07-24 22:30:21.147410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.236 [2024-07-24 22:30:21.147418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.236 [2024-07-24 22:30:21.147424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.236 [2024-07-24 22:30:21.147442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.236 qpair failed and we were unable to recover it. 00:31:26.236 [2024-07-24 22:30:21.157280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.236 [2024-07-24 22:30:21.157423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.236 [2024-07-24 22:30:21.157442] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.236 [2024-07-24 22:30:21.157448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.236 [2024-07-24 22:30:21.157456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.236 [2024-07-24 22:30:21.157473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.236 qpair failed and we were unable to recover it. 00:31:26.237 [2024-07-24 22:30:21.167251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.167579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.237 [2024-07-24 22:30:21.167598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.237 [2024-07-24 22:30:21.167605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.237 [2024-07-24 22:30:21.167612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.237 [2024-07-24 22:30:21.167629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.237 qpair failed and we were unable to recover it. 
00:31:26.237 [2024-07-24 22:30:21.177306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.177443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.237 [2024-07-24 22:30:21.177462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.237 [2024-07-24 22:30:21.177469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.237 [2024-07-24 22:30:21.177476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.237 [2024-07-24 22:30:21.177493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.237 qpair failed and we were unable to recover it. 00:31:26.237 [2024-07-24 22:30:21.187292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.187432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.237 [2024-07-24 22:30:21.187451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.237 [2024-07-24 22:30:21.187458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.237 [2024-07-24 22:30:21.187465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.237 [2024-07-24 22:30:21.187483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.237 qpair failed and we were unable to recover it. 00:31:26.237 [2024-07-24 22:30:21.197507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.197641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.237 [2024-07-24 22:30:21.197660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.237 [2024-07-24 22:30:21.197667] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.237 [2024-07-24 22:30:21.197673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.237 [2024-07-24 22:30:21.197691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.237 qpair failed and we were unable to recover it. 
00:31:26.237 [2024-07-24 22:30:21.207405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.207543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.237 [2024-07-24 22:30:21.207562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.237 [2024-07-24 22:30:21.207569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.237 [2024-07-24 22:30:21.207576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.237 [2024-07-24 22:30:21.207594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.237 qpair failed and we were unable to recover it. 00:31:26.237 [2024-07-24 22:30:21.217367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.217508] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.237 [2024-07-24 22:30:21.217526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.237 [2024-07-24 22:30:21.217534] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.237 [2024-07-24 22:30:21.217541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.237 [2024-07-24 22:30:21.217558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.237 qpair failed and we were unable to recover it. 00:31:26.237 [2024-07-24 22:30:21.227453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.227593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.237 [2024-07-24 22:30:21.227615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.237 [2024-07-24 22:30:21.227622] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.237 [2024-07-24 22:30:21.227629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.237 [2024-07-24 22:30:21.227646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.237 qpair failed and we were unable to recover it. 
00:31:26.237 [2024-07-24 22:30:21.237494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.237627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.237 [2024-07-24 22:30:21.237646] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.237 [2024-07-24 22:30:21.237654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.237 [2024-07-24 22:30:21.237661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.237 [2024-07-24 22:30:21.237679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.237 qpair failed and we were unable to recover it. 00:31:26.237 [2024-07-24 22:30:21.247460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.247601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.237 [2024-07-24 22:30:21.247620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.237 [2024-07-24 22:30:21.247627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.237 [2024-07-24 22:30:21.247634] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.237 [2024-07-24 22:30:21.247651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.237 qpair failed and we were unable to recover it. 00:31:26.237 [2024-07-24 22:30:21.257566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.257747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.237 [2024-07-24 22:30:21.257766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.237 [2024-07-24 22:30:21.257773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.237 [2024-07-24 22:30:21.257780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.237 [2024-07-24 22:30:21.257797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.237 qpair failed and we were unable to recover it. 
00:31:26.237 [2024-07-24 22:30:21.267546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.267683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.237 [2024-07-24 22:30:21.267701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.237 [2024-07-24 22:30:21.267709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.237 [2024-07-24 22:30:21.267716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.237 [2024-07-24 22:30:21.267737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.237 qpair failed and we were unable to recover it. 00:31:26.237 [2024-07-24 22:30:21.277580] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.277915] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.237 [2024-07-24 22:30:21.277934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.237 [2024-07-24 22:30:21.277941] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.237 [2024-07-24 22:30:21.277948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.237 [2024-07-24 22:30:21.277964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.237 qpair failed and we were unable to recover it. 00:31:26.237 [2024-07-24 22:30:21.287647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.287804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.237 [2024-07-24 22:30:21.287823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.237 [2024-07-24 22:30:21.287830] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.237 [2024-07-24 22:30:21.287837] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.237 [2024-07-24 22:30:21.287854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.237 qpair failed and we were unable to recover it. 
00:31:26.237 [2024-07-24 22:30:21.297584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.237 [2024-07-24 22:30:21.297724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.238 [2024-07-24 22:30:21.297741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.238 [2024-07-24 22:30:21.297748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.238 [2024-07-24 22:30:21.297754] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.238 [2024-07-24 22:30:21.297771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.238 qpair failed and we were unable to recover it. 00:31:26.238 [2024-07-24 22:30:21.307673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.238 [2024-07-24 22:30:21.307811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.238 [2024-07-24 22:30:21.307828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.238 [2024-07-24 22:30:21.307835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.238 [2024-07-24 22:30:21.307841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.238 [2024-07-24 22:30:21.307859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.238 qpair failed and we were unable to recover it. 00:31:26.238 [2024-07-24 22:30:21.317743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.238 [2024-07-24 22:30:21.317889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.238 [2024-07-24 22:30:21.317911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.238 [2024-07-24 22:30:21.317919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.238 [2024-07-24 22:30:21.317926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.238 [2024-07-24 22:30:21.317943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.238 qpair failed and we were unable to recover it. 
00:31:26.238 [2024-07-24 22:30:21.327732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.238 [2024-07-24 22:30:21.327869] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.238 [2024-07-24 22:30:21.327887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.238 [2024-07-24 22:30:21.327895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.238 [2024-07-24 22:30:21.327902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.238 [2024-07-24 22:30:21.327920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.238 qpair failed and we were unable to recover it. 00:31:26.238 [2024-07-24 22:30:21.337762] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.238 [2024-07-24 22:30:21.337904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.238 [2024-07-24 22:30:21.337922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.238 [2024-07-24 22:30:21.337929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.238 [2024-07-24 22:30:21.337936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.238 [2024-07-24 22:30:21.337953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.238 qpair failed and we were unable to recover it. 00:31:26.238 [2024-07-24 22:30:21.347793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.238 [2024-07-24 22:30:21.347932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.238 [2024-07-24 22:30:21.347951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.238 [2024-07-24 22:30:21.347959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.238 [2024-07-24 22:30:21.347965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.238 [2024-07-24 22:30:21.347983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.238 qpair failed and we were unable to recover it. 
00:31:26.238 [2024-07-24 22:30:21.357826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.238 [2024-07-24 22:30:21.357971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.238 [2024-07-24 22:30:21.357990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.238 [2024-07-24 22:30:21.357997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.238 [2024-07-24 22:30:21.358004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.238 [2024-07-24 22:30:21.358025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.238 qpair failed and we were unable to recover it. 00:31:26.238 [2024-07-24 22:30:21.367850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.238 [2024-07-24 22:30:21.368002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.238 [2024-07-24 22:30:21.368021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.238 [2024-07-24 22:30:21.368029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.238 [2024-07-24 22:30:21.368035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.238 [2024-07-24 22:30:21.368060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.238 qpair failed and we were unable to recover it. 00:31:26.498 [2024-07-24 22:30:21.377922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.498 [2024-07-24 22:30:21.378077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.498 [2024-07-24 22:30:21.378097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.498 [2024-07-24 22:30:21.378104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.498 [2024-07-24 22:30:21.378110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.498 [2024-07-24 22:30:21.378127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.498 qpair failed and we were unable to recover it. 
00:31:26.498 [2024-07-24 22:30:21.387893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.498 [2024-07-24 22:30:21.388037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.498 [2024-07-24 22:30:21.388061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.498 [2024-07-24 22:30:21.388068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.498 [2024-07-24 22:30:21.388075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.498 [2024-07-24 22:30:21.388092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.498 qpair failed and we were unable to recover it. 00:31:26.498 [2024-07-24 22:30:21.397939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.498 [2024-07-24 22:30:21.398086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.498 [2024-07-24 22:30:21.398105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.498 [2024-07-24 22:30:21.398112] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.498 [2024-07-24 22:30:21.398119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.498 [2024-07-24 22:30:21.398136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.498 qpair failed and we were unable to recover it. 00:31:26.498 [2024-07-24 22:30:21.407979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.498 [2024-07-24 22:30:21.408122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.498 [2024-07-24 22:30:21.408147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.498 [2024-07-24 22:30:21.408154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.498 [2024-07-24 22:30:21.408161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.498 [2024-07-24 22:30:21.408179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.498 qpair failed and we were unable to recover it. 
00:31:26.498 [2024-07-24 22:30:21.418010] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.498 [2024-07-24 22:30:21.418157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.418176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.418183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.499 [2024-07-24 22:30:21.418190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.499 [2024-07-24 22:30:21.418208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.499 qpair failed and we were unable to recover it. 00:31:26.499 [2024-07-24 22:30:21.428036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.499 [2024-07-24 22:30:21.428179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.428199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.428206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.499 [2024-07-24 22:30:21.428214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.499 [2024-07-24 22:30:21.428231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.499 qpair failed and we were unable to recover it. 00:31:26.499 [2024-07-24 22:30:21.437989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.499 [2024-07-24 22:30:21.438135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.438155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.438163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.499 [2024-07-24 22:30:21.438169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.499 [2024-07-24 22:30:21.438187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.499 qpair failed and we were unable to recover it. 
00:31:26.499 [2024-07-24 22:30:21.448093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.499 [2024-07-24 22:30:21.448235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.448254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.448261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.499 [2024-07-24 22:30:21.448271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.499 [2024-07-24 22:30:21.448289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.499 qpair failed and we were unable to recover it. 00:31:26.499 [2024-07-24 22:30:21.458058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.499 [2024-07-24 22:30:21.458200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.458218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.458226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.499 [2024-07-24 22:30:21.458233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.499 [2024-07-24 22:30:21.458250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.499 qpair failed and we were unable to recover it. 00:31:26.499 [2024-07-24 22:30:21.468155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.499 [2024-07-24 22:30:21.468297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.468316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.468323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.499 [2024-07-24 22:30:21.468330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.499 [2024-07-24 22:30:21.468347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.499 qpair failed and we were unable to recover it. 
00:31:26.499 [2024-07-24 22:30:21.478162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.499 [2024-07-24 22:30:21.478307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.478326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.478333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.499 [2024-07-24 22:30:21.478339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.499 [2024-07-24 22:30:21.478357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.499 qpair failed and we were unable to recover it. 00:31:26.499 [2024-07-24 22:30:21.488212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.499 [2024-07-24 22:30:21.488360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.488378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.488385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.499 [2024-07-24 22:30:21.488393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.499 [2024-07-24 22:30:21.488410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.499 qpair failed and we were unable to recover it. 00:31:26.499 [2024-07-24 22:30:21.498245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.499 [2024-07-24 22:30:21.498383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.498402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.498409] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.499 [2024-07-24 22:30:21.498415] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.499 [2024-07-24 22:30:21.498432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.499 qpair failed and we were unable to recover it. 
00:31:26.499 [2024-07-24 22:30:21.508201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.499 [2024-07-24 22:30:21.508339] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.508359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.508366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.499 [2024-07-24 22:30:21.508373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.499 [2024-07-24 22:30:21.508390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.499 qpair failed and we were unable to recover it. 00:31:26.499 [2024-07-24 22:30:21.518228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.499 [2024-07-24 22:30:21.518369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.518388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.518396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.499 [2024-07-24 22:30:21.518402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.499 [2024-07-24 22:30:21.518419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.499 qpair failed and we were unable to recover it. 00:31:26.499 [2024-07-24 22:30:21.528540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.499 [2024-07-24 22:30:21.528682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.528700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.528708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.499 [2024-07-24 22:30:21.528714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.499 [2024-07-24 22:30:21.528732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.499 qpair failed and we were unable to recover it. 
00:31:26.499 [2024-07-24 22:30:21.538359] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.499 [2024-07-24 22:30:21.538498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.538516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.538525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.499 [2024-07-24 22:30:21.538535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.499 [2024-07-24 22:30:21.538553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.499 qpair failed and we were unable to recover it. 00:31:26.499 [2024-07-24 22:30:21.548384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.499 [2024-07-24 22:30:21.548521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.499 [2024-07-24 22:30:21.548539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.499 [2024-07-24 22:30:21.548546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.500 [2024-07-24 22:30:21.548553] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.500 [2024-07-24 22:30:21.548570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.500 qpair failed and we were unable to recover it. 00:31:26.500 [2024-07-24 22:30:21.558379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.500 [2024-07-24 22:30:21.558519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.500 [2024-07-24 22:30:21.558537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.500 [2024-07-24 22:30:21.558544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.500 [2024-07-24 22:30:21.558551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.500 [2024-07-24 22:30:21.558569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.500 qpair failed and we were unable to recover it. 
00:31:26.500 [2024-07-24 22:30:21.568436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.500 [2024-07-24 22:30:21.568577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.500 [2024-07-24 22:30:21.568596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.500 [2024-07-24 22:30:21.568603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.500 [2024-07-24 22:30:21.568610] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.500 [2024-07-24 22:30:21.568627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.500 qpair failed and we were unable to recover it. 00:31:26.500 [2024-07-24 22:30:21.578468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.500 [2024-07-24 22:30:21.578604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.500 [2024-07-24 22:30:21.578623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.500 [2024-07-24 22:30:21.578631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.500 [2024-07-24 22:30:21.578638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.500 [2024-07-24 22:30:21.578655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.500 qpair failed and we were unable to recover it. 00:31:26.500 [2024-07-24 22:30:21.588423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.500 [2024-07-24 22:30:21.588749] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.500 [2024-07-24 22:30:21.588767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.500 [2024-07-24 22:30:21.588775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.500 [2024-07-24 22:30:21.588781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.500 [2024-07-24 22:30:21.588797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.500 qpair failed and we were unable to recover it. 
00:31:26.500 [2024-07-24 22:30:21.598504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.500 [2024-07-24 22:30:21.598646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.500 [2024-07-24 22:30:21.598664] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.500 [2024-07-24 22:30:21.598671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.500 [2024-07-24 22:30:21.598678] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.500 [2024-07-24 22:30:21.598695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.500 qpair failed and we were unable to recover it. 00:31:26.500 [2024-07-24 22:30:21.608556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.500 [2024-07-24 22:30:21.608694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.500 [2024-07-24 22:30:21.608712] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.500 [2024-07-24 22:30:21.608719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.500 [2024-07-24 22:30:21.608726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.500 [2024-07-24 22:30:21.608743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.500 qpair failed and we were unable to recover it. 00:31:26.500 [2024-07-24 22:30:21.618582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.500 [2024-07-24 22:30:21.618723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.500 [2024-07-24 22:30:21.618742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.500 [2024-07-24 22:30:21.618749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.500 [2024-07-24 22:30:21.618756] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.500 [2024-07-24 22:30:21.618773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.500 qpair failed and we were unable to recover it. 
00:31:26.500 [2024-07-24 22:30:21.628642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.500 [2024-07-24 22:30:21.628794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.500 [2024-07-24 22:30:21.628812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.500 [2024-07-24 22:30:21.628822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.500 [2024-07-24 22:30:21.628829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.500 [2024-07-24 22:30:21.628846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.500 qpair failed and we were unable to recover it. 00:31:26.760 [2024-07-24 22:30:21.638648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.760 [2024-07-24 22:30:21.638794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.760 [2024-07-24 22:30:21.638814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.760 [2024-07-24 22:30:21.638820] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.760 [2024-07-24 22:30:21.638827] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.760 [2024-07-24 22:30:21.638844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.760 qpair failed and we were unable to recover it. 00:31:26.760 [2024-07-24 22:30:21.648682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.760 [2024-07-24 22:30:21.648824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.760 [2024-07-24 22:30:21.648843] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.760 [2024-07-24 22:30:21.648850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.760 [2024-07-24 22:30:21.648857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.760 [2024-07-24 22:30:21.648874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.760 qpair failed and we were unable to recover it. 
00:31:26.760 [2024-07-24 22:30:21.658705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.760 [2024-07-24 22:30:21.658846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.760 [2024-07-24 22:30:21.658864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.760 [2024-07-24 22:30:21.658871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.760 [2024-07-24 22:30:21.658878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.760 [2024-07-24 22:30:21.658895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.760 qpair failed and we were unable to recover it. 00:31:26.760 [2024-07-24 22:30:21.668759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.760 [2024-07-24 22:30:21.668950] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.760 [2024-07-24 22:30:21.668969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.760 [2024-07-24 22:30:21.668976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.760 [2024-07-24 22:30:21.668983] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.760 [2024-07-24 22:30:21.669000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.760 qpair failed and we were unable to recover it. 00:31:26.760 [2024-07-24 22:30:21.678770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.760 [2024-07-24 22:30:21.678916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.760 [2024-07-24 22:30:21.678934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.760 [2024-07-24 22:30:21.678942] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.760 [2024-07-24 22:30:21.678949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.760 [2024-07-24 22:30:21.678967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.761 qpair failed and we were unable to recover it. 
00:31:26.761 [2024-07-24 22:30:21.688803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.761 [2024-07-24 22:30:21.688938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.761 [2024-07-24 22:30:21.688956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.761 [2024-07-24 22:30:21.688963] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.761 [2024-07-24 22:30:21.688970] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.761 [2024-07-24 22:30:21.688987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.761 qpair failed and we were unable to recover it. 00:31:26.761 [2024-07-24 22:30:21.698838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.761 [2024-07-24 22:30:21.698975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.761 [2024-07-24 22:30:21.698994] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.761 [2024-07-24 22:30:21.699001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.761 [2024-07-24 22:30:21.699008] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.761 [2024-07-24 22:30:21.699024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.761 qpair failed and we were unable to recover it. 00:31:26.761 [2024-07-24 22:30:21.708865] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.761 [2024-07-24 22:30:21.709005] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.761 [2024-07-24 22:30:21.709025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.761 [2024-07-24 22:30:21.709032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.761 [2024-07-24 22:30:21.709040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.761 [2024-07-24 22:30:21.709062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.761 qpair failed and we were unable to recover it. 
00:31:26.761 [2024-07-24 22:30:21.718870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.761 [2024-07-24 22:30:21.719014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.761 [2024-07-24 22:30:21.719036] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.761 [2024-07-24 22:30:21.719049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.761 [2024-07-24 22:30:21.719057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.761 [2024-07-24 22:30:21.719074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.761 qpair failed and we were unable to recover it. 00:31:26.761 [2024-07-24 22:30:21.728963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.761 [2024-07-24 22:30:21.729112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.761 [2024-07-24 22:30:21.729131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.761 [2024-07-24 22:30:21.729139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.761 [2024-07-24 22:30:21.729146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.761 [2024-07-24 22:30:21.729164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.761 qpair failed and we were unable to recover it. 00:31:26.761 [2024-07-24 22:30:21.738991] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.761 [2024-07-24 22:30:21.739152] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.761 [2024-07-24 22:30:21.739170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.761 [2024-07-24 22:30:21.739177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.761 [2024-07-24 22:30:21.739184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.761 [2024-07-24 22:30:21.739201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.761 qpair failed and we were unable to recover it. 
00:31:26.761 [2024-07-24 22:30:21.748987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.761 [2024-07-24 22:30:21.749129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.761 [2024-07-24 22:30:21.749148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.761 [2024-07-24 22:30:21.749156] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.761 [2024-07-24 22:30:21.749163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.761 [2024-07-24 22:30:21.749181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.761 qpair failed and we were unable to recover it. 00:31:26.761 [2024-07-24 22:30:21.759021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.761 [2024-07-24 22:30:21.759171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.761 [2024-07-24 22:30:21.759190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.761 [2024-07-24 22:30:21.759197] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.761 [2024-07-24 22:30:21.759203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.761 [2024-07-24 22:30:21.759220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.761 qpair failed and we were unable to recover it. 00:31:26.761 [2024-07-24 22:30:21.769014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.761 [2024-07-24 22:30:21.769157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.761 [2024-07-24 22:30:21.769176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.761 [2024-07-24 22:30:21.769183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.761 [2024-07-24 22:30:21.769189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.761 [2024-07-24 22:30:21.769207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.761 qpair failed and we were unable to recover it. 
00:31:26.761 [2024-07-24 22:30:21.779073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.761 [2024-07-24 22:30:21.779211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.761 [2024-07-24 22:30:21.779230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.761 [2024-07-24 22:30:21.779238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.761 [2024-07-24 22:30:21.779244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:26.761 [2024-07-24 22:30:21.779262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:26.761 qpair failed and we were unable to recover it. 00:31:26.761 [2024-07-24 22:30:21.789140] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.761 [2024-07-24 22:30:21.789324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.761 [2024-07-24 22:30:21.789354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.761 [2024-07-24 22:30:21.789366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.761 [2024-07-24 22:30:21.789375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:26.761 [2024-07-24 22:30:21.789400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.761 qpair failed and we were unable to recover it. 00:31:26.761 [2024-07-24 22:30:21.799135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.761 [2024-07-24 22:30:21.799279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.761 [2024-07-24 22:30:21.799300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.761 [2024-07-24 22:30:21.799307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.761 [2024-07-24 22:30:21.799315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:26.761 [2024-07-24 22:30:21.799333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.761 qpair failed and we were unable to recover it. 
00:31:26.761 [2024-07-24 22:30:21.809210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.761 [2024-07-24 22:30:21.809364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.761 [2024-07-24 22:30:21.809389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.761 [2024-07-24 22:30:21.809396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.761 [2024-07-24 22:30:21.809403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:26.761 [2024-07-24 22:30:21.809420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.761 qpair failed and we were unable to recover it. 00:31:26.761 [2024-07-24 22:30:21.819187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.762 [2024-07-24 22:30:21.819328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.762 [2024-07-24 22:30:21.819347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.762 [2024-07-24 22:30:21.819354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.762 [2024-07-24 22:30:21.819361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:26.762 [2024-07-24 22:30:21.819378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.762 qpair failed and we were unable to recover it. 00:31:26.762 [2024-07-24 22:30:21.829264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.762 [2024-07-24 22:30:21.829409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.762 [2024-07-24 22:30:21.829429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.762 [2024-07-24 22:30:21.829436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.762 [2024-07-24 22:30:21.829443] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:26.762 [2024-07-24 22:30:21.829460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.762 qpair failed and we were unable to recover it. 
00:31:26.762 [2024-07-24 22:30:21.839292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.762 [2024-07-24 22:30:21.839436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.762 [2024-07-24 22:30:21.839455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.762 [2024-07-24 22:30:21.839462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.762 [2024-07-24 22:30:21.839469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:26.762 [2024-07-24 22:30:21.839486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.762 qpair failed and we were unable to recover it. 00:31:26.762 [2024-07-24 22:30:21.849278] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.762 [2024-07-24 22:30:21.849414] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.762 [2024-07-24 22:30:21.849433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.762 [2024-07-24 22:30:21.849440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.762 [2024-07-24 22:30:21.849448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:26.762 [2024-07-24 22:30:21.849468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.762 qpair failed and we were unable to recover it. 00:31:26.762 [2024-07-24 22:30:21.859319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.762 [2024-07-24 22:30:21.859461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.762 [2024-07-24 22:30:21.859481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.762 [2024-07-24 22:30:21.859488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.762 [2024-07-24 22:30:21.859495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:26.762 [2024-07-24 22:30:21.859512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.762 qpair failed and we were unable to recover it. 
00:31:26.762 [2024-07-24 22:30:21.869386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.762 [2024-07-24 22:30:21.869540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.762 [2024-07-24 22:30:21.869559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.762 [2024-07-24 22:30:21.869566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.762 [2024-07-24 22:30:21.869573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:26.762 [2024-07-24 22:30:21.869589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.762 qpair failed and we were unable to recover it. 00:31:26.762 [2024-07-24 22:30:21.879369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.762 [2024-07-24 22:30:21.879520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.762 [2024-07-24 22:30:21.879541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.762 [2024-07-24 22:30:21.879548] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.762 [2024-07-24 22:30:21.879554] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:26.762 [2024-07-24 22:30:21.879572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.762 qpair failed and we were unable to recover it. 00:31:26.762 [2024-07-24 22:30:21.889397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:26.762 [2024-07-24 22:30:21.889540] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:26.762 [2024-07-24 22:30:21.889561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:26.762 [2024-07-24 22:30:21.889569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:26.762 [2024-07-24 22:30:21.889575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:26.762 [2024-07-24 22:30:21.889595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:26.762 qpair failed and we were unable to recover it. 
00:31:27.023 [2024-07-24 22:30:21.899422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.023 [2024-07-24 22:30:21.899574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.023 [2024-07-24 22:30:21.899598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.023 [2024-07-24 22:30:21.899606] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.023 [2024-07-24 22:30:21.899613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.023 [2024-07-24 22:30:21.899630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.023 qpair failed and we were unable to recover it. 00:31:27.023 [2024-07-24 22:30:21.909409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.023 [2024-07-24 22:30:21.909554] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.023 [2024-07-24 22:30:21.909574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.023 [2024-07-24 22:30:21.909582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.023 [2024-07-24 22:30:21.909589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.023 [2024-07-24 22:30:21.909606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.023 qpair failed and we were unable to recover it. 00:31:27.023 [2024-07-24 22:30:21.919445] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.023 [2024-07-24 22:30:21.919588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.023 [2024-07-24 22:30:21.919608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.023 [2024-07-24 22:30:21.919615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.023 [2024-07-24 22:30:21.919622] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.023 [2024-07-24 22:30:21.919639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.023 qpair failed and we were unable to recover it. 
00:31:27.023 [2024-07-24 22:30:21.929510] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.023 [2024-07-24 22:30:21.929660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.023 [2024-07-24 22:30:21.929681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.023 [2024-07-24 22:30:21.929688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.023 [2024-07-24 22:30:21.929695] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.023 [2024-07-24 22:30:21.929712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.023 qpair failed and we were unable to recover it. 00:31:27.023 [2024-07-24 22:30:21.939539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.023 [2024-07-24 22:30:21.939680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.023 [2024-07-24 22:30:21.939700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.023 [2024-07-24 22:30:21.939707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.023 [2024-07-24 22:30:21.939714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.023 [2024-07-24 22:30:21.939733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.023 qpair failed and we were unable to recover it. 00:31:27.023 [2024-07-24 22:30:21.949564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.023 [2024-07-24 22:30:21.949707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.023 [2024-07-24 22:30:21.949727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.023 [2024-07-24 22:30:21.949734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.023 [2024-07-24 22:30:21.949741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.023 [2024-07-24 22:30:21.949757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.023 qpair failed and we were unable to recover it. 
00:31:27.023 [2024-07-24 22:30:21.959547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.023 [2024-07-24 22:30:21.959695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.023 [2024-07-24 22:30:21.959715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.023 [2024-07-24 22:30:21.959722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.023 [2024-07-24 22:30:21.959729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.023 [2024-07-24 22:30:21.959745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.023 qpair failed and we were unable to recover it. 00:31:27.023 [2024-07-24 22:30:21.969643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.023 [2024-07-24 22:30:21.969797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.023 [2024-07-24 22:30:21.969817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.023 [2024-07-24 22:30:21.969824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.023 [2024-07-24 22:30:21.969831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.023 [2024-07-24 22:30:21.969848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.023 qpair failed and we were unable to recover it. 00:31:27.023 [2024-07-24 22:30:21.979643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.023 [2024-07-24 22:30:21.979781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.023 [2024-07-24 22:30:21.979801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.023 [2024-07-24 22:30:21.979808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.023 [2024-07-24 22:30:21.979814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.023 [2024-07-24 22:30:21.979832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.023 qpair failed and we were unable to recover it. 
00:31:27.023 [2024-07-24 22:30:21.989685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.023 [2024-07-24 22:30:21.989821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.023 [2024-07-24 22:30:21.989844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.023 [2024-07-24 22:30:21.989852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.023 [2024-07-24 22:30:21.989859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.023 [2024-07-24 22:30:21.989877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.023 qpair failed and we were unable to recover it. 00:31:27.023 [2024-07-24 22:30:21.999714] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.023 [2024-07-24 22:30:21.999859] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.023 [2024-07-24 22:30:21.999878] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.024 [2024-07-24 22:30:21.999886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.024 [2024-07-24 22:30:21.999893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.024 [2024-07-24 22:30:21.999910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.024 qpair failed and we were unable to recover it. 00:31:27.024 [2024-07-24 22:30:22.009706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.024 [2024-07-24 22:30:22.009850] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.024 [2024-07-24 22:30:22.009870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.024 [2024-07-24 22:30:22.009877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.024 [2024-07-24 22:30:22.009883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.024 [2024-07-24 22:30:22.009900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.024 qpair failed and we were unable to recover it. 
00:31:27.024 [2024-07-24 22:30:22.019794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.024 [2024-07-24 22:30:22.019931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.024 [2024-07-24 22:30:22.019951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.024 [2024-07-24 22:30:22.019958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.024 [2024-07-24 22:30:22.019965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.024 [2024-07-24 22:30:22.019982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.024 qpair failed and we were unable to recover it. 00:31:27.024 [2024-07-24 22:30:22.029835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.024 [2024-07-24 22:30:22.029974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.024 [2024-07-24 22:30:22.029994] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.024 [2024-07-24 22:30:22.030001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.024 [2024-07-24 22:30:22.030008] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.024 [2024-07-24 22:30:22.030029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.024 qpair failed and we were unable to recover it. 00:31:27.024 [2024-07-24 22:30:22.039880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.024 [2024-07-24 22:30:22.040019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.024 [2024-07-24 22:30:22.040039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.024 [2024-07-24 22:30:22.040054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.024 [2024-07-24 22:30:22.040061] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.024 [2024-07-24 22:30:22.040079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.024 qpair failed and we were unable to recover it. 
00:31:27.024 [2024-07-24 22:30:22.049906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.024 [2024-07-24 22:30:22.050054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.024 [2024-07-24 22:30:22.050073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.024 [2024-07-24 22:30:22.050081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.024 [2024-07-24 22:30:22.050087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.024 [2024-07-24 22:30:22.050105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.024 qpair failed and we were unable to recover it. 00:31:27.024 [2024-07-24 22:30:22.059931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.024 [2024-07-24 22:30:22.060079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.024 [2024-07-24 22:30:22.060099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.024 [2024-07-24 22:30:22.060106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.024 [2024-07-24 22:30:22.060113] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.024 [2024-07-24 22:30:22.060130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.024 qpair failed and we were unable to recover it. 00:31:27.024 [2024-07-24 22:30:22.069959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.024 [2024-07-24 22:30:22.070106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.024 [2024-07-24 22:30:22.070126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.024 [2024-07-24 22:30:22.070133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.024 [2024-07-24 22:30:22.070140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.024 [2024-07-24 22:30:22.070157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.024 qpair failed and we were unable to recover it. 
00:31:27.024 [2024-07-24 22:30:22.079961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.024 [2024-07-24 22:30:22.080129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.024 [2024-07-24 22:30:22.080152] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.024 [2024-07-24 22:30:22.080159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.024 [2024-07-24 22:30:22.080166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.024 [2024-07-24 22:30:22.080183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.024 qpair failed and we were unable to recover it. 00:31:27.024 [2024-07-24 22:30:22.090008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.024 [2024-07-24 22:30:22.090148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.024 [2024-07-24 22:30:22.090168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.024 [2024-07-24 22:30:22.090175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.024 [2024-07-24 22:30:22.090182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.024 [2024-07-24 22:30:22.090200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.024 qpair failed and we were unable to recover it. 00:31:27.024 [2024-07-24 22:30:22.100030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.024 [2024-07-24 22:30:22.100172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.024 [2024-07-24 22:30:22.100193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.024 [2024-07-24 22:30:22.100200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.024 [2024-07-24 22:30:22.100206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.024 [2024-07-24 22:30:22.100223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.024 qpair failed and we were unable to recover it. 
00:31:27.024 [2024-07-24 22:30:22.110075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.024 [2024-07-24 22:30:22.110214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.024 [2024-07-24 22:30:22.110233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.024 [2024-07-24 22:30:22.110240] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.024 [2024-07-24 22:30:22.110246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.024 [2024-07-24 22:30:22.110263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.024 qpair failed and we were unable to recover it. 00:31:27.024 [2024-07-24 22:30:22.120093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.024 [2024-07-24 22:30:22.120240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.024 [2024-07-24 22:30:22.120260] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.024 [2024-07-24 22:30:22.120268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.024 [2024-07-24 22:30:22.120280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.024 [2024-07-24 22:30:22.120297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.024 qpair failed and we were unable to recover it. 00:31:27.024 [2024-07-24 22:30:22.130106] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.024 [2024-07-24 22:30:22.130246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.025 [2024-07-24 22:30:22.130266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.025 [2024-07-24 22:30:22.130273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.025 [2024-07-24 22:30:22.130281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.025 [2024-07-24 22:30:22.130298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.025 qpair failed and we were unable to recover it. 
00:31:27.025 [2024-07-24 22:30:22.140200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.025 [2024-07-24 22:30:22.140335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.025 [2024-07-24 22:30:22.140355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.025 [2024-07-24 22:30:22.140362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.025 [2024-07-24 22:30:22.140369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.025 [2024-07-24 22:30:22.140385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.025 qpair failed and we were unable to recover it. 00:31:27.025 [2024-07-24 22:30:22.150192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.025 [2024-07-24 22:30:22.150332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.025 [2024-07-24 22:30:22.150351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.025 [2024-07-24 22:30:22.150358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.025 [2024-07-24 22:30:22.150365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.025 [2024-07-24 22:30:22.150382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.025 qpair failed and we were unable to recover it. 00:31:27.284 [2024-07-24 22:30:22.160204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.284 [2024-07-24 22:30:22.160348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.284 [2024-07-24 22:30:22.160368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.284 [2024-07-24 22:30:22.160375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.284 [2024-07-24 22:30:22.160381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.284 [2024-07-24 22:30:22.160398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.284 qpair failed and we were unable to recover it. 
00:31:27.284 [2024-07-24 22:30:22.170300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.284 [2024-07-24 22:30:22.170460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.285 [2024-07-24 22:30:22.170480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.285 [2024-07-24 22:30:22.170487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.285 [2024-07-24 22:30:22.170494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.285 [2024-07-24 22:30:22.170511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.285 qpair failed and we were unable to recover it. 00:31:27.285 [2024-07-24 22:30:22.180276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.285 [2024-07-24 22:30:22.180419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.285 [2024-07-24 22:30:22.180438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.285 [2024-07-24 22:30:22.180446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.285 [2024-07-24 22:30:22.180452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.285 [2024-07-24 22:30:22.180469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.285 qpair failed and we were unable to recover it. 00:31:27.285 [2024-07-24 22:30:22.190297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.285 [2024-07-24 22:30:22.190442] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.285 [2024-07-24 22:30:22.190462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.285 [2024-07-24 22:30:22.190469] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.285 [2024-07-24 22:30:22.190476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.285 [2024-07-24 22:30:22.190493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.285 qpair failed and we were unable to recover it. 
00:31:27.285 [2024-07-24 22:30:22.200264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.285 [2024-07-24 22:30:22.200406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.285 [2024-07-24 22:30:22.200425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.285 [2024-07-24 22:30:22.200432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.285 [2024-07-24 22:30:22.200439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.285 [2024-07-24 22:30:22.200456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.285 qpair failed and we were unable to recover it. 00:31:27.285 [2024-07-24 22:30:22.210340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.285 [2024-07-24 22:30:22.210478] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.285 [2024-07-24 22:30:22.210499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.285 [2024-07-24 22:30:22.210507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.285 [2024-07-24 22:30:22.210518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.285 [2024-07-24 22:30:22.210535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.285 qpair failed and we were unable to recover it. 00:31:27.285 [2024-07-24 22:30:22.220578] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.285 [2024-07-24 22:30:22.220733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.285 [2024-07-24 22:30:22.220753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.285 [2024-07-24 22:30:22.220761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.285 [2024-07-24 22:30:22.220767] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.285 [2024-07-24 22:30:22.220784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.285 qpair failed and we were unable to recover it. 
00:31:27.285 [2024-07-24 22:30:22.230409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.285 [2024-07-24 22:30:22.230583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.285 [2024-07-24 22:30:22.230603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.285 [2024-07-24 22:30:22.230610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.285 [2024-07-24 22:30:22.230617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f7c00 00:31:27.285 [2024-07-24 22:30:22.230635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:31:27.285 qpair failed and we were unable to recover it. 00:31:27.285 [2024-07-24 22:30:22.240435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.285 [2024-07-24 22:30:22.240617] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.285 [2024-07-24 22:30:22.240641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.285 [2024-07-24 22:30:22.240650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.285 [2024-07-24 22:30:22.240657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73dc000b90 00:31:27.285 [2024-07-24 22:30:22.240676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:27.285 qpair failed and we were unable to recover it. 00:31:27.285 [2024-07-24 22:30:22.250463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.285 [2024-07-24 22:30:22.250608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.285 [2024-07-24 22:30:22.250628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.285 [2024-07-24 22:30:22.250636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.285 [2024-07-24 22:30:22.250642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73dc000b90 00:31:27.285 [2024-07-24 22:30:22.250659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:31:27.285 qpair failed and we were unable to recover it. 
00:31:27.285 [2024-07-24 22:30:22.250756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13056b0 (9): Bad file descriptor 00:31:27.285 [2024-07-24 22:30:22.260583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.285 [2024-07-24 22:30:22.260775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.285 [2024-07-24 22:30:22.260805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.285 [2024-07-24 22:30:22.260817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.285 [2024-07-24 22:30:22.260826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:27.285 [2024-07-24 22:30:22.260852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.285 qpair failed and we were unable to recover it. 00:31:27.285 [2024-07-24 22:30:22.270558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.285 [2024-07-24 22:30:22.270698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.285 [2024-07-24 22:30:22.270717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.285 [2024-07-24 22:30:22.270725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.285 [2024-07-24 22:30:22.270731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73e4000b90 00:31:27.285 [2024-07-24 22:30:22.270749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:31:27.285 qpair failed and we were unable to recover it. 00:31:27.285 [2024-07-24 22:30:22.280594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.285 [2024-07-24 22:30:22.280772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.285 [2024-07-24 22:30:22.280802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.285 [2024-07-24 22:30:22.280814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.285 [2024-07-24 22:30:22.280824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:27.285 [2024-07-24 22:30:22.280849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:27.285 qpair failed and we were unable to recover it. 
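The CONNECT failures above repeat one pattern, with only the timestamps and the tqpair address changing. A minimal triage sketch, assuming this portion of the log has been saved to a file named target_disconnect.log (the file name is illustrative, not part of this run):

  # count how many qpairs the test gave up on
  grep -o 'qpair failed and we were unable to recover it' target_disconnect.log | wc -l
  # group the failures by tqpair address to see how many distinct qpairs were involved
  grep -o 'Failed to connect tqpair=0x[0-9a-f]*' target_disconnect.log | sort | uniq -c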
00:31:27.285 [2024-07-24 22:30:22.290603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:27.285 [2024-07-24 22:30:22.290739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:27.285 [2024-07-24 22:30:22.290758] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:27.285 [2024-07-24 22:30:22.290765] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:27.285 [2024-07-24 22:30:22.290772] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f73d4000b90 00:31:27.285 [2024-07-24 22:30:22.290789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:31:27.285 qpair failed and we were unable to recover it. 00:31:27.285 Initializing NVMe Controllers 00:31:27.285 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:27.285 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:27.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:27.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:27.285 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:27.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:27.286 Initialization complete. Launching workers. 00:31:27.286 Starting thread on core 1 00:31:27.286 Starting thread on core 2 00:31:27.286 Starting thread on core 3 00:31:27.286 Starting thread on core 0 00:31:27.286 22:30:22 -- host/target_disconnect.sh@59 -- # sync 00:31:27.286 00:31:27.286 real 0m11.201s 00:31:27.286 user 0m20.609s 00:31:27.286 sys 0m4.257s 00:31:27.286 22:30:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:27.286 22:30:22 -- common/autotest_common.sh@10 -- # set +x 00:31:27.286 ************************************ 00:31:27.286 END TEST nvmf_target_disconnect_tc2 00:31:27.286 ************************************ 00:31:27.286 22:30:22 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:31:27.286 22:30:22 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:27.286 22:30:22 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:31:27.286 22:30:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:27.286 22:30:22 -- nvmf/common.sh@116 -- # sync 00:31:27.286 22:30:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:27.286 22:30:22 -- nvmf/common.sh@119 -- # set +e 00:31:27.286 22:30:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:27.286 22:30:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:27.286 rmmod nvme_tcp 00:31:27.286 rmmod nvme_fabrics 00:31:27.286 rmmod nvme_keyring 00:31:27.286 22:30:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:27.286 22:30:22 -- nvmf/common.sh@123 -- # set -e 00:31:27.286 22:30:22 -- nvmf/common.sh@124 -- # return 0 00:31:27.286 22:30:22 -- nvmf/common.sh@477 -- # '[' -n 3745784 ']' 00:31:27.286 22:30:22 -- nvmf/common.sh@478 -- # killprocess 3745784 00:31:27.286 22:30:22 -- common/autotest_common.sh@926 -- # '[' -z 3745784 ']' 00:31:27.286 22:30:22 -- common/autotest_common.sh@930 -- # kill -0 3745784 00:31:27.286 22:30:22 -- common/autotest_common.sh@931 -- # uname 00:31:27.286 22:30:22 
-- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:27.286 22:30:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3745784 00:31:27.545 22:30:22 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:31:27.545 22:30:22 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:31:27.545 22:30:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3745784' 00:31:27.545 killing process with pid 3745784 00:31:27.545 22:30:22 -- common/autotest_common.sh@945 -- # kill 3745784 00:31:27.545 22:30:22 -- common/autotest_common.sh@950 -- # wait 3745784 00:31:27.545 22:30:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:31:27.545 22:30:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:27.545 22:30:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:27.545 22:30:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:27.545 22:30:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:27.545 22:30:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.545 22:30:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:27.545 22:30:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.085 22:30:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:30.085 00:31:30.085 real 0m19.015s 00:31:30.085 user 0m47.357s 00:31:30.085 sys 0m8.585s 00:31:30.085 22:30:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.085 22:30:24 -- common/autotest_common.sh@10 -- # set +x 00:31:30.085 ************************************ 00:31:30.085 END TEST nvmf_target_disconnect 00:31:30.085 ************************************ 00:31:30.085 22:30:24 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:31:30.085 22:30:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:30.085 22:30:24 -- common/autotest_common.sh@10 -- # set +x 00:31:30.085 22:30:24 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:31:30.085 00:31:30.085 real 24m36.914s 00:31:30.085 user 66m55.253s 00:31:30.085 sys 6m20.916s 00:31:30.085 22:30:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.085 22:30:24 -- common/autotest_common.sh@10 -- # set +x 00:31:30.085 ************************************ 00:31:30.085 END TEST nvmf_tcp 00:31:30.085 ************************************ 00:31:30.085 22:30:24 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:31:30.085 22:30:24 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:30.085 22:30:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:30.085 22:30:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:30.085 22:30:24 -- common/autotest_common.sh@10 -- # set +x 00:31:30.085 ************************************ 00:31:30.085 START TEST spdkcli_nvmf_tcp 00:31:30.085 ************************************ 00:31:30.085 22:30:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:30.085 * Looking for test storage... 
00:31:30.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:31:30.085 22:30:24 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:31:30.085 22:30:24 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:31:30.085 22:30:24 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:31:30.085 22:30:24 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:30.085 22:30:24 -- nvmf/common.sh@7 -- # uname -s 00:31:30.085 22:30:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.085 22:30:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.085 22:30:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.085 22:30:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.085 22:30:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:30.085 22:30:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.085 22:30:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.085 22:30:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.085 22:30:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.085 22:30:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.085 22:30:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:30.085 22:30:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:30.085 22:30:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.085 22:30:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.085 22:30:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:30.085 22:30:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:30.085 22:30:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:30.085 22:30:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.085 22:30:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.086 22:30:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.086 22:30:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.086 22:30:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.086 22:30:24 -- paths/export.sh@5 -- # export PATH 00:31:30.086 22:30:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.086 22:30:24 -- nvmf/common.sh@46 -- # : 0 00:31:30.086 22:30:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:30.086 22:30:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:30.086 22:30:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:30.086 22:30:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:30.086 22:30:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:30.086 22:30:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:30.086 22:30:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:30.086 22:30:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:30.086 22:30:24 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:30.086 22:30:24 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:30.086 22:30:24 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:30.086 22:30:24 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:30.086 22:30:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:30.086 22:30:24 -- common/autotest_common.sh@10 -- # set +x 00:31:30.086 22:30:24 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:30.086 22:30:24 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3747322 00:31:30.086 22:30:24 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:30.086 22:30:24 -- spdkcli/common.sh@34 -- # waitforlisten 3747322 00:31:30.086 22:30:24 -- common/autotest_common.sh@819 -- # '[' -z 3747322 ']' 00:31:30.086 22:30:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.086 22:30:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:30.086 22:30:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.086 22:30:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:30.086 22:30:24 -- common/autotest_common.sh@10 -- # set +x 00:31:30.086 [2024-07-24 22:30:24.933634] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
00:31:30.086 [2024-07-24 22:30:24.933684] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747322 ] 00:31:30.086 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.086 [2024-07-24 22:30:24.986563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:30.086 [2024-07-24 22:30:25.026452] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:30.086 [2024-07-24 22:30:25.026594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.086 [2024-07-24 22:30:25.026597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.655 22:30:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:30.655 22:30:25 -- common/autotest_common.sh@852 -- # return 0 00:31:30.655 22:30:25 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:30.655 22:30:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:30.655 22:30:25 -- common/autotest_common.sh@10 -- # set +x 00:31:30.655 22:30:25 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:30.655 22:30:25 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:30.655 22:30:25 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:30.655 22:30:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:30.655 22:30:25 -- common/autotest_common.sh@10 -- # set +x 00:31:30.655 22:30:25 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:30.655 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:30.655 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:30.655 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:30.655 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:30.655 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:30.655 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:30.655 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:30.655 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:30.655 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:30.655 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:30.655 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:30.655 ' 00:31:31.225 [2024-07-24 22:30:26.118908] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:33.133 [2024-07-24 22:30:28.154528] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:34.548 [2024-07-24 22:30:29.330522] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:37.087 [2024-07-24 22:30:31.658065] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:38.989 [2024-07-24 22:30:33.712709] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:40.366 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:40.366 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:40.366 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:40.366 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:40.366 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:40.366 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:40.366 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:40.366 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:40.366 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:40.366 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:40.366 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:40.366 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:40.366 22:30:35 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:40.366 22:30:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:40.366 22:30:35 -- common/autotest_common.sh@10 -- # set +x 00:31:40.366 22:30:35 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:40.366 22:30:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:40.366 22:30:35 -- common/autotest_common.sh@10 -- # set +x 00:31:40.366 22:30:35 -- spdkcli/nvmf.sh@69 -- # check_match 00:31:40.366 22:30:35 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:31:40.933 22:30:35 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:40.934 22:30:35 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:40.934 22:30:35 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:40.934 22:30:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:40.934 22:30:35 -- common/autotest_common.sh@10 -- # set +x 00:31:40.934 22:30:35 -- spdkcli/nvmf.sh@72 -- # timing_enter 
spdkcli_clear_nvmf_config 00:31:40.934 22:30:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:40.934 22:30:35 -- common/autotest_common.sh@10 -- # set +x 00:31:40.934 22:30:35 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:40.934 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:40.934 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:40.934 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:40.934 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:40.934 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:40.934 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:40.934 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:40.934 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:40.934 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:40.934 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:40.934 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:40.934 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:40.934 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:40.934 ' 00:31:46.205 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:46.205 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:46.205 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:46.205 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:46.205 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:46.205 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:46.205 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:46.206 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:46.206 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:46.206 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:46.206 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:46.206 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:46.206 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:46.206 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:46.206 22:30:40 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:46.206 22:30:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:46.206 22:30:40 -- common/autotest_common.sh@10 -- # set +x 00:31:46.206 22:30:40 -- spdkcli/nvmf.sh@90 -- # killprocess 3747322 00:31:46.206 22:30:40 -- common/autotest_common.sh@926 -- # '[' -z 3747322 ']' 00:31:46.206 22:30:40 -- 
common/autotest_common.sh@930 -- # kill -0 3747322 00:31:46.206 22:30:40 -- common/autotest_common.sh@931 -- # uname 00:31:46.206 22:30:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:46.206 22:30:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3747322 00:31:46.206 22:30:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:46.206 22:30:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:46.206 22:30:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3747322' 00:31:46.206 killing process with pid 3747322 00:31:46.206 22:30:40 -- common/autotest_common.sh@945 -- # kill 3747322 00:31:46.206 [2024-07-24 22:30:40.885439] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:46.206 22:30:40 -- common/autotest_common.sh@950 -- # wait 3747322 00:31:46.206 22:30:41 -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:46.206 22:30:41 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:46.206 22:30:41 -- spdkcli/common.sh@13 -- # '[' -n 3747322 ']' 00:31:46.206 22:30:41 -- spdkcli/common.sh@14 -- # killprocess 3747322 00:31:46.206 22:30:41 -- common/autotest_common.sh@926 -- # '[' -z 3747322 ']' 00:31:46.206 22:30:41 -- common/autotest_common.sh@930 -- # kill -0 3747322 00:31:46.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3747322) - No such process 00:31:46.206 22:30:41 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3747322 is not found' 00:31:46.206 Process with pid 3747322 is not found 00:31:46.206 22:30:41 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:46.206 22:30:41 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:46.206 22:30:41 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:46.206 00:31:46.206 real 0m16.268s 00:31:46.206 user 0m34.368s 00:31:46.206 sys 0m0.770s 00:31:46.206 22:30:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:46.206 22:30:41 -- common/autotest_common.sh@10 -- # set +x 00:31:46.206 ************************************ 00:31:46.206 END TEST spdkcli_nvmf_tcp 00:31:46.206 ************************************ 00:31:46.206 22:30:41 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:46.206 22:30:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:46.206 22:30:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:46.206 22:30:41 -- common/autotest_common.sh@10 -- # set +x 00:31:46.206 ************************************ 00:31:46.206 START TEST nvmf_identify_passthru 00:31:46.206 ************************************ 00:31:46.206 22:30:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:46.206 * Looking for test storage... 
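Before the teardown above, the suite's check_match helper dumps the live spdkcli view of /nvmf and compares it with a stored pattern file, and killprocess verifies the pid is still alive and owned by the test before killing and reaping it. A condensed sketch of both helpers, built only from the commands visible in this log (paths shortened to the SPDK tree, $pid standing in for the target's pid):

    # check_match: capture the current /nvmf tree and diff it against the expected pattern
    scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
    test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match
    rm -f test/spdkcli/match_files/spdkcli_nvmf.test

    # killprocess: confirm the pid exists and is not sudo, then kill it and wait for it to exit
    if kill -0 "$pid" && [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
        kill "$pid" && wait "$pid"
    fi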
00:31:46.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:46.206 22:30:41 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.206 22:30:41 -- nvmf/common.sh@7 -- # uname -s 00:31:46.206 22:30:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.206 22:30:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.206 22:30:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.206 22:30:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.206 22:30:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.206 22:30:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.206 22:30:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.206 22:30:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.206 22:30:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.206 22:30:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.206 22:30:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:46.206 22:30:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:46.206 22:30:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.206 22:30:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.206 22:30:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.206 22:30:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.206 22:30:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.206 22:30:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.206 22:30:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.206 22:30:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.206 22:30:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.206 22:30:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.206 22:30:41 -- paths/export.sh@5 -- # export PATH 00:31:46.206 22:30:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.206 22:30:41 -- nvmf/common.sh@46 -- # : 0 00:31:46.206 22:30:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:46.206 22:30:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:46.206 22:30:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:46.206 22:30:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.206 22:30:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.206 22:30:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:46.206 22:30:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:46.206 22:30:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:46.206 22:30:41 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.206 22:30:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.206 22:30:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.206 22:30:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.206 22:30:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.207 22:30:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.207 22:30:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.207 22:30:41 -- paths/export.sh@5 -- # export PATH 00:31:46.207 22:30:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.207 22:30:41 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:31:46.207 22:30:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:46.207 22:30:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.207 22:30:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:46.207 22:30:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:46.207 22:30:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:46.207 22:30:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.207 22:30:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:46.207 22:30:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.207 22:30:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:46.207 22:30:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:46.207 22:30:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:46.207 22:30:41 -- common/autotest_common.sh@10 -- # set +x 00:31:51.485 22:30:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:51.485 22:30:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:51.485 22:30:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:51.485 22:30:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:51.485 22:30:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:51.485 22:30:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:51.485 22:30:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:51.485 22:30:46 -- nvmf/common.sh@294 -- # net_devs=() 00:31:51.485 22:30:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:51.485 22:30:46 -- nvmf/common.sh@295 -- # e810=() 00:31:51.485 22:30:46 -- nvmf/common.sh@295 -- # local -ga e810 00:31:51.485 22:30:46 -- nvmf/common.sh@296 -- # x722=() 00:31:51.485 22:30:46 -- nvmf/common.sh@296 -- # local -ga x722 00:31:51.485 22:30:46 -- nvmf/common.sh@297 -- # mlx=() 00:31:51.485 22:30:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:51.485 22:30:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.485 22:30:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.485 22:30:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.485 22:30:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.485 22:30:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.485 22:30:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.485 22:30:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.485 22:30:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.485 22:30:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.485 22:30:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.485 22:30:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.485 22:30:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:51.485 22:30:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:51.485 22:30:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:51.485 22:30:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:51.485 22:30:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:51.485 Found 0000:86:00.0 (0x8086 - 
0x159b) 00:31:51.485 22:30:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:51.485 22:30:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:51.485 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:51.485 22:30:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:51.485 22:30:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:51.485 22:30:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.485 22:30:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:51.485 22:30:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.485 22:30:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:51.485 Found net devices under 0000:86:00.0: cvl_0_0 00:31:51.485 22:30:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.485 22:30:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:51.485 22:30:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.485 22:30:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:51.485 22:30:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.485 22:30:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:51.485 Found net devices under 0000:86:00.1: cvl_0_1 00:31:51.485 22:30:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.485 22:30:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:51.485 22:30:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:51.485 22:30:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:51.485 22:30:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:51.485 22:30:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.485 22:30:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.486 22:30:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.486 22:30:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:51.486 22:30:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.486 22:30:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.486 22:30:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:51.486 22:30:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.486 22:30:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.486 22:30:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:51.486 22:30:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:51.486 22:30:46 -- nvmf/common.sh@247 -- # ip netns add 
cvl_0_0_ns_spdk 00:31:51.486 22:30:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.486 22:30:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.486 22:30:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:51.486 22:30:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:51.486 22:30:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:51.486 22:30:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:51.486 22:30:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:51.486 22:30:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:51.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:51.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:31:51.486 00:31:51.486 --- 10.0.0.2 ping statistics --- 00:31:51.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.486 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:31:51.486 22:30:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:51.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:51.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:31:51.486 00:31:51.486 --- 10.0.0.1 ping statistics --- 00:31:51.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.486 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:31:51.486 22:30:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:51.486 22:30:46 -- nvmf/common.sh@410 -- # return 0 00:31:51.486 22:30:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:31:51.486 22:30:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.486 22:30:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:51.486 22:30:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:51.486 22:30:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.486 22:30:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:51.486 22:30:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:51.486 22:30:46 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:51.486 22:30:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:51.486 22:30:46 -- common/autotest_common.sh@10 -- # set +x 00:31:51.486 22:30:46 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:51.486 22:30:46 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:51.486 22:30:46 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:51.486 22:30:46 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:51.486 22:30:46 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:51.486 22:30:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:51.486 22:30:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:51.486 22:30:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:51.486 22:30:46 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:51.486 22:30:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:51.486 22:30:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:51.486 22:30:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:31:51.486 22:30:46 -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:31:51.486 22:30:46 -- 
target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:31:51.486 22:30:46 -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:31:51.486 22:30:46 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:51.486 22:30:46 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:31:51.486 22:30:46 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:51.745 EAL: No free 2048 kB hugepages reported on node 1 00:31:55.938 22:30:50 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:31:55.938 22:30:50 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:31:55.938 22:30:50 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:55.938 22:30:50 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:55.938 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.132 22:30:54 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:00.132 22:30:54 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:00.132 22:30:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:00.132 22:30:54 -- common/autotest_common.sh@10 -- # set +x 00:32:00.132 22:30:54 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:00.132 22:30:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:00.132 22:30:54 -- common/autotest_common.sh@10 -- # set +x 00:32:00.132 22:30:54 -- target/identify_passthru.sh@31 -- # nvmfpid=3754426 00:32:00.132 22:30:54 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:00.132 22:30:54 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:00.132 22:30:54 -- target/identify_passthru.sh@35 -- # waitforlisten 3754426 00:32:00.132 22:30:54 -- common/autotest_common.sh@819 -- # '[' -z 3754426 ']' 00:32:00.132 22:30:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.132 22:30:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:00.132 22:30:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.132 22:30:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:00.132 22:30:54 -- common/autotest_common.sh@10 -- # set +x 00:32:00.132 [2024-07-24 22:30:54.917250] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:32:00.132 [2024-07-24 22:30:54.917295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.132 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.132 [2024-07-24 22:30:54.974122] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:00.132 [2024-07-24 22:30:55.014260] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:00.132 [2024-07-24 22:30:55.014371] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
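At this point identify_passthru resolves the first local NVMe controller's PCI address and reads its serial and model number directly over PCIe, so they can later be compared against what the passthru subsystem reports over the fabric. A sketch of that probe assembled from the commands shown above (the head -n1 is added here only to pick the first controller):

    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
    model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
    echo "$bdf $serial $model"    # here: 0000:5e:00.0 BTLJ72430F0E1P0FGN INTEL

The same two identify calls are repeated later against 'trtype:tcp ... subnqn:nqn.2016-06.io.spdk:cnode1', and the test passes only if both pairs of values match.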
00:32:00.132 [2024-07-24 22:30:55.014379] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.132 [2024-07-24 22:30:55.014385] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.132 [2024-07-24 22:30:55.014420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.132 [2024-07-24 22:30:55.014519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.132 [2024-07-24 22:30:55.014609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.132 [2024-07-24 22:30:55.014610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.132 22:30:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:00.132 22:30:55 -- common/autotest_common.sh@852 -- # return 0 00:32:00.132 22:30:55 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:00.132 22:30:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.132 22:30:55 -- common/autotest_common.sh@10 -- # set +x 00:32:00.132 INFO: Log level set to 20 00:32:00.132 INFO: Requests: 00:32:00.132 { 00:32:00.132 "jsonrpc": "2.0", 00:32:00.132 "method": "nvmf_set_config", 00:32:00.132 "id": 1, 00:32:00.132 "params": { 00:32:00.132 "admin_cmd_passthru": { 00:32:00.132 "identify_ctrlr": true 00:32:00.132 } 00:32:00.132 } 00:32:00.132 } 00:32:00.132 00:32:00.132 INFO: response: 00:32:00.132 { 00:32:00.132 "jsonrpc": "2.0", 00:32:00.132 "id": 1, 00:32:00.132 "result": true 00:32:00.132 } 00:32:00.132 00:32:00.132 22:30:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.132 22:30:55 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:00.132 22:30:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.132 22:30:55 -- common/autotest_common.sh@10 -- # set +x 00:32:00.132 INFO: Setting log level to 20 00:32:00.132 INFO: Setting log level to 20 00:32:00.132 INFO: Log level set to 20 00:32:00.132 INFO: Log level set to 20 00:32:00.132 INFO: Requests: 00:32:00.132 { 00:32:00.132 "jsonrpc": "2.0", 00:32:00.132 "method": "framework_start_init", 00:32:00.132 "id": 1 00:32:00.132 } 00:32:00.132 00:32:00.132 INFO: Requests: 00:32:00.132 { 00:32:00.132 "jsonrpc": "2.0", 00:32:00.132 "method": "framework_start_init", 00:32:00.132 "id": 1 00:32:00.133 } 00:32:00.133 00:32:00.133 [2024-07-24 22:30:55.139944] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:00.133 INFO: response: 00:32:00.133 { 00:32:00.133 "jsonrpc": "2.0", 00:32:00.133 "id": 1, 00:32:00.133 "result": true 00:32:00.133 } 00:32:00.133 00:32:00.133 INFO: response: 00:32:00.133 { 00:32:00.133 "jsonrpc": "2.0", 00:32:00.133 "id": 1, 00:32:00.133 "result": true 00:32:00.133 } 00:32:00.133 00:32:00.133 22:30:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.133 22:30:55 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:00.133 22:30:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.133 22:30:55 -- common/autotest_common.sh@10 -- # set +x 00:32:00.133 INFO: Setting log level to 40 00:32:00.133 INFO: Setting log level to 40 00:32:00.133 INFO: Setting log level to 40 00:32:00.133 [2024-07-24 22:30:55.153412] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.133 22:30:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:00.133 22:30:55 -- target/identify_passthru.sh@39 -- # timing_exit 
start_nvmf_tgt 00:32:00.133 22:30:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:00.133 22:30:55 -- common/autotest_common.sh@10 -- # set +x 00:32:00.133 22:30:55 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:00.133 22:30:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:00.133 22:30:55 -- common/autotest_common.sh@10 -- # set +x 00:32:03.449 Nvme0n1 00:32:03.449 22:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.449 22:30:58 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:03.449 22:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.449 22:30:58 -- common/autotest_common.sh@10 -- # set +x 00:32:03.449 22:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.449 22:30:58 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:03.449 22:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.449 22:30:58 -- common/autotest_common.sh@10 -- # set +x 00:32:03.449 22:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.449 22:30:58 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:03.449 22:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.449 22:30:58 -- common/autotest_common.sh@10 -- # set +x 00:32:03.450 [2024-07-24 22:30:58.046965] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.450 22:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.450 22:30:58 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:03.450 22:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.450 22:30:58 -- common/autotest_common.sh@10 -- # set +x 00:32:03.450 [2024-07-24 22:30:58.054744] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:32:03.450 [ 00:32:03.450 { 00:32:03.450 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:03.450 "subtype": "Discovery", 00:32:03.450 "listen_addresses": [], 00:32:03.450 "allow_any_host": true, 00:32:03.450 "hosts": [] 00:32:03.450 }, 00:32:03.450 { 00:32:03.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:03.450 "subtype": "NVMe", 00:32:03.450 "listen_addresses": [ 00:32:03.450 { 00:32:03.450 "transport": "TCP", 00:32:03.450 "trtype": "TCP", 00:32:03.450 "adrfam": "IPv4", 00:32:03.450 "traddr": "10.0.0.2", 00:32:03.450 "trsvcid": "4420" 00:32:03.450 } 00:32:03.450 ], 00:32:03.450 "allow_any_host": true, 00:32:03.450 "hosts": [], 00:32:03.450 "serial_number": "SPDK00000000000001", 00:32:03.450 "model_number": "SPDK bdev Controller", 00:32:03.450 "max_namespaces": 1, 00:32:03.450 "min_cntlid": 1, 00:32:03.450 "max_cntlid": 65519, 00:32:03.450 "namespaces": [ 00:32:03.450 { 00:32:03.450 "nsid": 1, 00:32:03.450 "bdev_name": "Nvme0n1", 00:32:03.450 "name": "Nvme0n1", 00:32:03.450 "nguid": "3C22BF649F3B4EEF814E7D20B2DECDC5", 00:32:03.450 "uuid": "3c22bf64-9f3b-4eef-814e-7d20b2decdc5" 00:32:03.450 } 00:32:03.450 ] 00:32:03.450 } 00:32:03.450 ] 00:32:03.450 22:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.450 22:30:58 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:03.450 22:30:58 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:03.450 22:30:58 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:03.450 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.450 22:30:58 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:32:03.450 22:30:58 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:03.450 22:30:58 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:03.450 22:30:58 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:03.450 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.450 22:30:58 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:03.450 22:30:58 -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:32:03.450 22:30:58 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:03.450 22:30:58 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:03.450 22:30:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:03.450 22:30:58 -- common/autotest_common.sh@10 -- # set +x 00:32:03.450 22:30:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:03.450 22:30:58 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:03.450 22:30:58 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:03.450 22:30:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:03.450 22:30:58 -- nvmf/common.sh@116 -- # sync 00:32:03.450 22:30:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:03.450 22:30:58 -- nvmf/common.sh@119 -- # set +e 00:32:03.450 22:30:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:03.450 22:30:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:03.450 rmmod nvme_tcp 00:32:03.450 rmmod nvme_fabrics 00:32:03.450 rmmod nvme_keyring 00:32:03.450 22:30:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:03.450 22:30:58 -- nvmf/common.sh@123 -- # set -e 00:32:03.450 22:30:58 -- nvmf/common.sh@124 -- # return 0 00:32:03.450 22:30:58 -- nvmf/common.sh@477 -- # '[' -n 3754426 ']' 00:32:03.450 22:30:58 -- nvmf/common.sh@478 -- # killprocess 3754426 00:32:03.450 22:30:58 -- common/autotest_common.sh@926 -- # '[' -z 3754426 ']' 00:32:03.450 22:30:58 -- common/autotest_common.sh@930 -- # kill -0 3754426 00:32:03.450 22:30:58 -- common/autotest_common.sh@931 -- # uname 00:32:03.450 22:30:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:03.450 22:30:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3754426 00:32:03.450 22:30:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:03.450 22:30:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:03.450 22:30:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3754426' 00:32:03.450 killing process with pid 3754426 00:32:03.450 22:30:58 -- common/autotest_common.sh@945 -- # kill 3754426 00:32:03.450 [2024-07-24 22:30:58.568225] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:32:03.450 22:30:58 -- common/autotest_common.sh@950 -- # wait 3754426 00:32:05.356 22:31:00 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:32:05.356 22:31:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:05.356 22:31:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:05.356 22:31:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:05.356 22:31:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:05.356 22:31:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:05.356 22:31:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:05.356 22:31:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.263 22:31:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:07.263 00:32:07.263 real 0m21.006s 00:32:07.263 user 0m27.289s 00:32:07.263 sys 0m4.707s 00:32:07.263 22:31:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:07.263 22:31:02 -- common/autotest_common.sh@10 -- # set +x 00:32:07.263 ************************************ 00:32:07.263 END TEST nvmf_identify_passthru 00:32:07.263 ************************************ 00:32:07.263 22:31:02 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:07.263 22:31:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:07.263 22:31:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:07.263 22:31:02 -- common/autotest_common.sh@10 -- # set +x 00:32:07.263 ************************************ 00:32:07.263 START TEST nvmf_dif 00:32:07.263 ************************************ 00:32:07.263 22:31:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:32:07.263 * Looking for test storage... 00:32:07.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:07.263 22:31:02 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:07.263 22:31:02 -- nvmf/common.sh@7 -- # uname -s 00:32:07.263 22:31:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:07.263 22:31:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:07.263 22:31:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:07.263 22:31:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:07.263 22:31:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:07.263 22:31:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:07.263 22:31:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:07.263 22:31:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:07.263 22:31:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:07.263 22:31:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:07.263 22:31:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:07.263 22:31:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:07.263 22:31:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:07.263 22:31:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:07.263 22:31:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:07.263 22:31:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:07.263 22:31:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:07.263 22:31:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:07.263 22:31:02 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:32:07.263 22:31:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.263 22:31:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.263 22:31:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.263 22:31:02 -- paths/export.sh@5 -- # export PATH 00:32:07.264 22:31:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.264 22:31:02 -- nvmf/common.sh@46 -- # : 0 00:32:07.264 22:31:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:07.264 22:31:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:07.264 22:31:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:07.264 22:31:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:07.264 22:31:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:07.264 22:31:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:07.264 22:31:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:07.264 22:31:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:07.264 22:31:02 -- target/dif.sh@15 -- # NULL_META=16 00:32:07.264 22:31:02 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:32:07.264 22:31:02 -- target/dif.sh@15 -- # NULL_SIZE=64 00:32:07.264 22:31:02 -- target/dif.sh@15 -- # NULL_DIF=1 00:32:07.264 22:31:02 -- target/dif.sh@135 -- # nvmftestinit 00:32:07.264 22:31:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:07.264 22:31:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:07.264 22:31:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:07.264 22:31:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:07.264 22:31:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:07.264 22:31:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.264 22:31:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:07.264 22:31:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:07.264 22:31:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:07.264 22:31:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
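nvmftestinit, invoked here again for the dif tests, splits the two E810 ports into a target/initiator pair: one port (cvl_0_0) is moved into a private network namespace for the SPDK target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening TCP port 4420 and a ping in each direction as a sanity check. The essential commands, as they appear in the surrounding log (interface names are specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator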
00:32:07.264 22:31:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:07.264 22:31:02 -- common/autotest_common.sh@10 -- # set +x 00:32:12.536 22:31:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:12.536 22:31:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:12.536 22:31:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:12.536 22:31:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:12.536 22:31:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:12.536 22:31:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:12.536 22:31:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:12.536 22:31:07 -- nvmf/common.sh@294 -- # net_devs=() 00:32:12.536 22:31:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:12.536 22:31:07 -- nvmf/common.sh@295 -- # e810=() 00:32:12.536 22:31:07 -- nvmf/common.sh@295 -- # local -ga e810 00:32:12.536 22:31:07 -- nvmf/common.sh@296 -- # x722=() 00:32:12.536 22:31:07 -- nvmf/common.sh@296 -- # local -ga x722 00:32:12.536 22:31:07 -- nvmf/common.sh@297 -- # mlx=() 00:32:12.536 22:31:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:12.536 22:31:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:12.536 22:31:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:12.536 22:31:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:12.536 22:31:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:12.536 22:31:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:12.536 22:31:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:12.536 22:31:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:12.536 22:31:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:12.536 22:31:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:12.537 22:31:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:12.537 22:31:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:12.537 22:31:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:12.537 22:31:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:12.537 22:31:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:12.537 22:31:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:12.537 22:31:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:12.537 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:12.537 22:31:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:12.537 22:31:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:12.537 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:12.537 22:31:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:32:12.537 22:31:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:12.537 22:31:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:12.537 22:31:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.537 22:31:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:12.537 22:31:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.537 22:31:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:12.537 Found net devices under 0000:86:00.0: cvl_0_0 00:32:12.537 22:31:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.537 22:31:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:12.537 22:31:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.537 22:31:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:12.537 22:31:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.537 22:31:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:12.537 Found net devices under 0000:86:00.1: cvl_0_1 00:32:12.537 22:31:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.537 22:31:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:12.537 22:31:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:12.537 22:31:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:12.537 22:31:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:12.537 22:31:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:12.537 22:31:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:12.537 22:31:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:12.537 22:31:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:12.537 22:31:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:12.537 22:31:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:12.537 22:31:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:12.537 22:31:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:12.537 22:31:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.537 22:31:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:12.537 22:31:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:12.537 22:31:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:12.537 22:31:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:12.537 22:31:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:12.537 22:31:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:12.537 22:31:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:12.537 22:31:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:12.537 22:31:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:12.537 22:31:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:12.537 22:31:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:12.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:32:12.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:32:12.537 00:32:12.537 --- 10.0.0.2 ping statistics --- 00:32:12.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.537 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:32:12.537 22:31:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:12.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:12.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.401 ms 00:32:12.537 00:32:12.537 --- 10.0.0.1 ping statistics --- 00:32:12.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.537 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:32:12.537 22:31:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.537 22:31:07 -- nvmf/common.sh@410 -- # return 0 00:32:12.537 22:31:07 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:32:12.537 22:31:07 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:15.076 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:15.076 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:32:15.076 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:32:15.076 22:31:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.076 22:31:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:15.076 22:31:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:15.076 22:31:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.076 22:31:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:15.076 22:31:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:15.076 22:31:10 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:32:15.076 22:31:10 -- target/dif.sh@137 -- # nvmfappstart 00:32:15.076 22:31:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:15.076 22:31:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:15.076 22:31:10 -- common/autotest_common.sh@10 -- # set +x 00:32:15.076 22:31:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:32:15.076 22:31:10 -- nvmf/common.sh@469 -- # nvmfpid=3759736 00:32:15.076 22:31:10 -- nvmf/common.sh@470 -- # waitforlisten 3759736 00:32:15.076 22:31:10 -- common/autotest_common.sh@819 -- # '[' -z 3759736 ']' 00:32:15.076 22:31:10 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:32:15.076 22:31:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:15.076 22:31:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.076 22:31:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:15.076 22:31:10 -- common/autotest_common.sh@10 -- # set +x 00:32:15.076 [2024-07-24 22:31:10.121295] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 00:32:15.076 [2024-07-24 22:31:10.121340] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.076 EAL: No free 2048 kB hugepages reported on node 1 00:32:15.076 [2024-07-24 22:31:10.175232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.336 [2024-07-24 22:31:10.215026] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:15.336 [2024-07-24 22:31:10.215138] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.336 [2024-07-24 22:31:10.215146] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.336 [2024-07-24 22:31:10.215153] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:15.336 [2024-07-24 22:31:10.215170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.904 22:31:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:15.904 22:31:10 -- common/autotest_common.sh@852 -- # return 0 00:32:15.904 22:31:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:15.904 22:31:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:15.904 22:31:10 -- common/autotest_common.sh@10 -- # set +x 00:32:15.904 22:31:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:15.904 22:31:10 -- target/dif.sh@139 -- # create_transport 00:32:15.904 22:31:10 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:32:15.904 22:31:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.904 22:31:10 -- common/autotest_common.sh@10 -- # set +x 00:32:15.904 [2024-07-24 22:31:10.960988] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.904 22:31:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.904 22:31:10 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:32:15.904 22:31:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:15.904 22:31:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:15.904 22:31:10 -- common/autotest_common.sh@10 -- # set +x 00:32:15.904 ************************************ 00:32:15.904 START TEST fio_dif_1_default 00:32:15.904 ************************************ 00:32:15.904 22:31:10 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:32:15.904 22:31:10 -- target/dif.sh@86 -- # create_subsystems 0 00:32:15.904 22:31:10 -- target/dif.sh@28 -- # local sub 00:32:15.904 22:31:10 -- target/dif.sh@30 -- # for sub in "$@" 00:32:15.904 22:31:10 -- target/dif.sh@31 -- # create_subsystem 0 00:32:15.904 22:31:10 -- target/dif.sh@18 -- # local sub_id=0 00:32:15.904 22:31:10 -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:15.904 22:31:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.904 22:31:10 -- common/autotest_common.sh@10 -- # set +x 00:32:15.904 bdev_null0 00:32:15.904 22:31:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.904 22:31:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:15.904 22:31:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.904 22:31:10 -- common/autotest_common.sh@10 -- # set +x 00:32:15.904 22:31:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.904 22:31:10 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:15.904 22:31:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.904 22:31:10 -- common/autotest_common.sh@10 -- # set +x 00:32:15.904 22:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.904 22:31:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:15.904 22:31:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:15.904 22:31:11 -- common/autotest_common.sh@10 -- # set +x 00:32:15.904 [2024-07-24 22:31:11.005220] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:15.904 22:31:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:15.904 22:31:11 -- target/dif.sh@87 -- # fio /dev/fd/62 00:32:15.904 22:31:11 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:32:15.904 22:31:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:15.904 22:31:11 -- nvmf/common.sh@520 -- # config=() 00:32:15.904 22:31:11 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.904 22:31:11 -- nvmf/common.sh@520 -- # local subsystem config 00:32:15.904 22:31:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:15.904 22:31:11 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:15.904 22:31:11 -- target/dif.sh@82 -- # gen_fio_conf 00:32:15.904 22:31:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:15.904 { 00:32:15.904 "params": { 00:32:15.904 "name": "Nvme$subsystem", 00:32:15.904 "trtype": "$TEST_TRANSPORT", 00:32:15.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:15.904 "adrfam": "ipv4", 00:32:15.904 "trsvcid": "$NVMF_PORT", 00:32:15.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:15.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:15.904 "hdgst": ${hdgst:-false}, 00:32:15.904 "ddgst": ${ddgst:-false} 00:32:15.904 }, 00:32:15.904 "method": "bdev_nvme_attach_controller" 00:32:15.904 } 00:32:15.904 EOF 00:32:15.904 )") 00:32:15.904 22:31:11 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:15.904 22:31:11 -- target/dif.sh@54 -- # local file 00:32:15.904 22:31:11 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:15.904 22:31:11 -- target/dif.sh@56 -- # cat 00:32:15.904 22:31:11 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:15.904 22:31:11 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.905 22:31:11 -- common/autotest_common.sh@1320 -- # shift 00:32:15.905 22:31:11 -- 
common/autotest_common.sh@1322 -- # local asan_lib= 00:32:15.905 22:31:11 -- nvmf/common.sh@542 -- # cat 00:32:15.905 22:31:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:15.905 22:31:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:15.905 22:31:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:15.905 22:31:11 -- target/dif.sh@72 -- # (( file <= files )) 00:32:15.905 22:31:11 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:15.905 22:31:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:15.905 22:31:11 -- nvmf/common.sh@544 -- # jq . 00:32:15.905 22:31:11 -- nvmf/common.sh@545 -- # IFS=, 00:32:15.905 22:31:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:15.905 "params": { 00:32:15.905 "name": "Nvme0", 00:32:15.905 "trtype": "tcp", 00:32:15.905 "traddr": "10.0.0.2", 00:32:15.905 "adrfam": "ipv4", 00:32:15.905 "trsvcid": "4420", 00:32:15.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:15.905 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:15.905 "hdgst": false, 00:32:15.905 "ddgst": false 00:32:15.905 }, 00:32:15.905 "method": "bdev_nvme_attach_controller" 00:32:15.905 }' 00:32:16.195 22:31:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:16.195 22:31:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:16.195 22:31:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:16.195 22:31:11 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:16.195 22:31:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:16.195 22:31:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:16.195 22:31:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:16.195 22:31:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:16.195 22:31:11 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:16.195 22:31:11 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:16.453 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:16.453 fio-3.35 00:32:16.453 Starting 1 thread 00:32:16.453 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.711 [2024-07-24 22:31:11.654196] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:16.711 [2024-07-24 22:31:11.654240] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:26.688 00:32:26.688 filename0: (groupid=0, jobs=1): err= 0: pid=3760115: Wed Jul 24 22:31:21 2024 00:32:26.688 read: IOPS=95, BW=380KiB/s (389kB/s)(3808KiB/10021msec) 00:32:26.688 slat (nsec): min=5860, max=25698, avg=6132.74, stdev=1039.12 00:32:26.688 clat (usec): min=41832, max=44004, avg=42086.69, stdev=320.20 00:32:26.688 lat (usec): min=41838, max=44010, avg=42092.82, stdev=320.32 00:32:26.688 clat percentiles (usec): 00:32:26.688 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:32:26.688 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:32:26.688 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:32:26.688 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:32:26.688 | 99.99th=[43779] 00:32:26.688 bw ( KiB/s): min= 352, max= 384, per=99.74%, avg=379.20, stdev=11.72, samples=20 00:32:26.688 iops : min= 88, max= 96, avg=94.80, stdev= 2.93, samples=20 00:32:26.688 lat (msec) : 50=100.00% 00:32:26.688 cpu : usr=94.94%, sys=4.82%, ctx=8, majf=0, minf=281 00:32:26.688 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:26.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:26.688 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:26.688 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:26.688 00:32:26.688 Run status group 0 (all jobs): 00:32:26.688 READ: bw=380KiB/s (389kB/s), 380KiB/s-380KiB/s (389kB/s-389kB/s), io=3808KiB (3899kB), run=10021-10021msec 00:32:26.948 22:31:21 -- target/dif.sh@88 -- # destroy_subsystems 0 00:32:26.948 22:31:21 -- target/dif.sh@43 -- # local sub 00:32:26.948 22:31:21 -- target/dif.sh@45 -- # for sub in "$@" 00:32:26.948 22:31:21 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:26.948 22:31:21 -- target/dif.sh@36 -- # local sub_id=0 00:32:26.948 22:31:21 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:26.948 22:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.948 22:31:21 -- common/autotest_common.sh@10 -- # set +x 00:32:26.948 22:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.948 22:31:21 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:26.948 22:31:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.948 22:31:21 -- common/autotest_common.sh@10 -- # set +x 00:32:26.948 22:31:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.948 00:32:26.948 real 0m10.997s 00:32:26.948 user 0m15.804s 00:32:26.948 sys 0m0.767s 00:32:26.948 22:31:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:26.948 22:31:21 -- common/autotest_common.sh@10 -- # set +x 00:32:26.948 ************************************ 00:32:26.948 END TEST fio_dif_1_default 00:32:26.948 ************************************ 00:32:26.948 22:31:22 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:32:26.948 22:31:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:26.948 22:31:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:26.948 22:31:22 -- common/autotest_common.sh@10 -- # set +x 00:32:26.948 ************************************ 00:32:26.948 START TEST fio_dif_1_multi_subsystems 00:32:26.948 
************************************ 00:32:26.948 22:31:22 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:32:26.948 22:31:22 -- target/dif.sh@92 -- # local files=1 00:32:26.948 22:31:22 -- target/dif.sh@94 -- # create_subsystems 0 1 00:32:26.948 22:31:22 -- target/dif.sh@28 -- # local sub 00:32:26.948 22:31:22 -- target/dif.sh@30 -- # for sub in "$@" 00:32:26.948 22:31:22 -- target/dif.sh@31 -- # create_subsystem 0 00:32:26.948 22:31:22 -- target/dif.sh@18 -- # local sub_id=0 00:32:26.948 22:31:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:26.948 22:31:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.948 22:31:22 -- common/autotest_common.sh@10 -- # set +x 00:32:26.948 bdev_null0 00:32:26.948 22:31:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.948 22:31:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:26.948 22:31:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.948 22:31:22 -- common/autotest_common.sh@10 -- # set +x 00:32:26.948 22:31:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.948 22:31:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:26.948 22:31:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.948 22:31:22 -- common/autotest_common.sh@10 -- # set +x 00:32:26.948 22:31:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.948 22:31:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:26.948 22:31:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.948 22:31:22 -- common/autotest_common.sh@10 -- # set +x 00:32:26.948 [2024-07-24 22:31:22.041735] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:26.948 22:31:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.948 22:31:22 -- target/dif.sh@30 -- # for sub in "$@" 00:32:26.948 22:31:22 -- target/dif.sh@31 -- # create_subsystem 1 00:32:26.948 22:31:22 -- target/dif.sh@18 -- # local sub_id=1 00:32:26.948 22:31:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:26.948 22:31:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.948 22:31:22 -- common/autotest_common.sh@10 -- # set +x 00:32:26.948 bdev_null1 00:32:26.948 22:31:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.948 22:31:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:26.948 22:31:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.948 22:31:22 -- common/autotest_common.sh@10 -- # set +x 00:32:26.948 22:31:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.948 22:31:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:26.948 22:31:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.948 22:31:22 -- common/autotest_common.sh@10 -- # set +x 00:32:26.948 22:31:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.948 22:31:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:26.948 22:31:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:26.948 22:31:22 -- common/autotest_common.sh@10 -- # set +x 
00:32:26.948 22:31:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:26.948 22:31:22 -- target/dif.sh@95 -- # fio /dev/fd/62 00:32:26.948 22:31:22 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:32:26.948 22:31:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:26.948 22:31:22 -- nvmf/common.sh@520 -- # config=() 00:32:27.263 22:31:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.263 22:31:22 -- nvmf/common.sh@520 -- # local subsystem config 00:32:27.263 22:31:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:27.263 22:31:22 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.263 22:31:22 -- target/dif.sh@82 -- # gen_fio_conf 00:32:27.263 22:31:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:27.263 { 00:32:27.263 "params": { 00:32:27.263 "name": "Nvme$subsystem", 00:32:27.263 "trtype": "$TEST_TRANSPORT", 00:32:27.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:27.263 "adrfam": "ipv4", 00:32:27.263 "trsvcid": "$NVMF_PORT", 00:32:27.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:27.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:27.263 "hdgst": ${hdgst:-false}, 00:32:27.263 "ddgst": ${ddgst:-false} 00:32:27.263 }, 00:32:27.263 "method": "bdev_nvme_attach_controller" 00:32:27.263 } 00:32:27.263 EOF 00:32:27.263 )") 00:32:27.263 22:31:22 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:27.263 22:31:22 -- target/dif.sh@54 -- # local file 00:32:27.263 22:31:22 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:27.263 22:31:22 -- target/dif.sh@56 -- # cat 00:32:27.263 22:31:22 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:27.263 22:31:22 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.263 22:31:22 -- common/autotest_common.sh@1320 -- # shift 00:32:27.263 22:31:22 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:27.263 22:31:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.263 22:31:22 -- nvmf/common.sh@542 -- # cat 00:32:27.263 22:31:22 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:27.263 22:31:22 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.263 22:31:22 -- target/dif.sh@72 -- # (( file <= files )) 00:32:27.263 22:31:22 -- target/dif.sh@73 -- # cat 00:32:27.263 22:31:22 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:27.263 22:31:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:27.263 22:31:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:27.263 22:31:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:27.263 { 00:32:27.263 "params": { 00:32:27.263 "name": "Nvme$subsystem", 00:32:27.263 "trtype": "$TEST_TRANSPORT", 00:32:27.263 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:27.263 "adrfam": "ipv4", 00:32:27.263 "trsvcid": "$NVMF_PORT", 00:32:27.263 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:27.263 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:27.263 "hdgst": ${hdgst:-false}, 00:32:27.263 "ddgst": ${ddgst:-false} 00:32:27.263 }, 00:32:27.263 "method": "bdev_nvme_attach_controller" 00:32:27.263 } 00:32:27.263 EOF 00:32:27.263 )") 00:32:27.263 22:31:22 -- target/dif.sh@72 -- # (( file++ )) 00:32:27.263 
22:31:22 -- target/dif.sh@72 -- # (( file <= files )) 00:32:27.263 22:31:22 -- nvmf/common.sh@542 -- # cat 00:32:27.263 22:31:22 -- nvmf/common.sh@544 -- # jq . 00:32:27.263 22:31:22 -- nvmf/common.sh@545 -- # IFS=, 00:32:27.263 22:31:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:27.263 "params": { 00:32:27.263 "name": "Nvme0", 00:32:27.263 "trtype": "tcp", 00:32:27.263 "traddr": "10.0.0.2", 00:32:27.263 "adrfam": "ipv4", 00:32:27.263 "trsvcid": "4420", 00:32:27.263 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:27.263 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:27.263 "hdgst": false, 00:32:27.263 "ddgst": false 00:32:27.263 }, 00:32:27.263 "method": "bdev_nvme_attach_controller" 00:32:27.263 },{ 00:32:27.263 "params": { 00:32:27.263 "name": "Nvme1", 00:32:27.263 "trtype": "tcp", 00:32:27.263 "traddr": "10.0.0.2", 00:32:27.263 "adrfam": "ipv4", 00:32:27.263 "trsvcid": "4420", 00:32:27.263 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:27.263 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:27.263 "hdgst": false, 00:32:27.263 "ddgst": false 00:32:27.263 }, 00:32:27.263 "method": "bdev_nvme_attach_controller" 00:32:27.263 }' 00:32:27.263 22:31:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:27.263 22:31:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:27.263 22:31:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.263 22:31:22 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:27.263 22:31:22 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:27.263 22:31:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:27.263 22:31:22 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:27.263 22:31:22 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:27.263 22:31:22 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:27.263 22:31:22 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.521 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:27.521 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:32:27.521 fio-3.35 00:32:27.521 Starting 2 threads 00:32:27.521 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.088 [2024-07-24 22:31:23.060830] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:28.088 [2024-07-24 22:31:23.060873] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:38.136 00:32:38.136 filename0: (groupid=0, jobs=1): err= 0: pid=3762117: Wed Jul 24 22:31:33 2024 00:32:38.136 read: IOPS=180, BW=722KiB/s (740kB/s)(7232KiB/10014msec) 00:32:38.136 slat (nsec): min=3040, max=21910, avg=6919.12, stdev=1819.52 00:32:38.136 clat (usec): min=1456, max=46238, avg=22133.32, stdev=20244.32 00:32:38.136 lat (usec): min=1462, max=46247, avg=22140.24, stdev=20243.81 00:32:38.136 clat percentiles (usec): 00:32:38.136 | 1.00th=[ 1680], 5.00th=[ 1696], 10.00th=[ 1696], 20.00th=[ 1729], 00:32:38.136 | 30.00th=[ 1778], 40.00th=[ 1827], 50.00th=[41681], 60.00th=[42206], 00:32:38.136 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:38.136 | 99.00th=[43254], 99.50th=[43779], 99.90th=[46400], 99.95th=[46400], 00:32:38.136 | 99.99th=[46400] 00:32:38.136 bw ( KiB/s): min= 672, max= 768, per=49.86%, avg=721.65, stdev=30.19, samples=20 00:32:38.136 iops : min= 168, max= 192, avg=180.40, stdev= 7.56, samples=20 00:32:38.136 lat (msec) : 2=46.74%, 4=3.04%, 50=50.22% 00:32:38.136 cpu : usr=98.05%, sys=1.69%, ctx=10, majf=0, minf=189 00:32:38.136 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:38.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.136 issued rwts: total=1808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.136 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:38.136 filename1: (groupid=0, jobs=1): err= 0: pid=3762118: Wed Jul 24 22:31:33 2024 00:32:38.136 read: IOPS=181, BW=724KiB/s (742kB/s)(7248KiB/10006msec) 00:32:38.136 slat (nsec): min=4509, max=22346, avg=6905.22, stdev=1773.62 00:32:38.136 clat (usec): min=1672, max=46236, avg=22066.79, stdev=20248.62 00:32:38.136 lat (usec): min=1678, max=46248, avg=22073.69, stdev=20248.10 00:32:38.136 clat percentiles (usec): 00:32:38.136 | 1.00th=[ 1680], 5.00th=[ 1696], 10.00th=[ 1696], 20.00th=[ 1713], 00:32:38.136 | 30.00th=[ 1795], 40.00th=[ 1827], 50.00th=[41681], 60.00th=[42206], 00:32:38.136 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:32:38.136 | 99.00th=[43254], 99.50th=[43254], 99.90th=[46400], 99.95th=[46400], 00:32:38.136 | 99.99th=[46400] 00:32:38.136 bw ( KiB/s): min= 704, max= 768, per=50.00%, avg=723.30, stdev=30.02, samples=20 00:32:38.136 iops : min= 176, max= 192, avg=180.80, stdev= 7.52, samples=20 00:32:38.136 lat (msec) : 2=49.23%, 4=0.66%, 50=50.11% 00:32:38.136 cpu : usr=98.03%, sys=1.71%, ctx=7, majf=0, minf=128 00:32:38.136 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:38.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.136 issued rwts: total=1812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.136 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:38.136 00:32:38.136 Run status group 0 (all jobs): 00:32:38.136 READ: bw=1446KiB/s (1481kB/s), 722KiB/s-724KiB/s (740kB/s-742kB/s), io=14.1MiB (14.8MB), run=10006-10014msec 00:32:38.395 22:31:33 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:38.395 22:31:33 -- target/dif.sh@43 -- # local sub 00:32:38.395 22:31:33 -- target/dif.sh@45 -- # for sub in "$@" 00:32:38.395 22:31:33 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:38.395 
22:31:33 -- target/dif.sh@36 -- # local sub_id=0 00:32:38.395 22:31:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:38.395 22:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.395 22:31:33 -- common/autotest_common.sh@10 -- # set +x 00:32:38.395 22:31:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.395 22:31:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:38.395 22:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.395 22:31:33 -- common/autotest_common.sh@10 -- # set +x 00:32:38.395 22:31:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.395 22:31:33 -- target/dif.sh@45 -- # for sub in "$@" 00:32:38.395 22:31:33 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:38.395 22:31:33 -- target/dif.sh@36 -- # local sub_id=1 00:32:38.395 22:31:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:38.395 22:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.395 22:31:33 -- common/autotest_common.sh@10 -- # set +x 00:32:38.395 22:31:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.395 22:31:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:38.395 22:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.395 22:31:33 -- common/autotest_common.sh@10 -- # set +x 00:32:38.395 22:31:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.395 00:32:38.395 real 0m11.366s 00:32:38.395 user 0m26.573s 00:32:38.395 sys 0m0.622s 00:32:38.395 22:31:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:38.395 22:31:33 -- common/autotest_common.sh@10 -- # set +x 00:32:38.395 ************************************ 00:32:38.395 END TEST fio_dif_1_multi_subsystems 00:32:38.395 ************************************ 00:32:38.395 22:31:33 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:38.395 22:31:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:38.395 22:31:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:38.395 22:31:33 -- common/autotest_common.sh@10 -- # set +x 00:32:38.395 ************************************ 00:32:38.395 START TEST fio_dif_rand_params 00:32:38.395 ************************************ 00:32:38.395 22:31:33 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:32:38.395 22:31:33 -- target/dif.sh@100 -- # local NULL_DIF 00:32:38.395 22:31:33 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:38.395 22:31:33 -- target/dif.sh@103 -- # NULL_DIF=3 00:32:38.395 22:31:33 -- target/dif.sh@103 -- # bs=128k 00:32:38.395 22:31:33 -- target/dif.sh@103 -- # numjobs=3 00:32:38.395 22:31:33 -- target/dif.sh@103 -- # iodepth=3 00:32:38.395 22:31:33 -- target/dif.sh@103 -- # runtime=5 00:32:38.395 22:31:33 -- target/dif.sh@105 -- # create_subsystems 0 00:32:38.395 22:31:33 -- target/dif.sh@28 -- # local sub 00:32:38.395 22:31:33 -- target/dif.sh@30 -- # for sub in "$@" 00:32:38.395 22:31:33 -- target/dif.sh@31 -- # create_subsystem 0 00:32:38.395 22:31:33 -- target/dif.sh@18 -- # local sub_id=0 00:32:38.395 22:31:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:38.395 22:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.395 22:31:33 -- common/autotest_common.sh@10 -- # set +x 00:32:38.395 bdev_null0 00:32:38.395 22:31:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.395 22:31:33 -- target/dif.sh@22 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:38.395 22:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.395 22:31:33 -- common/autotest_common.sh@10 -- # set +x 00:32:38.395 22:31:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.395 22:31:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:38.395 22:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.395 22:31:33 -- common/autotest_common.sh@10 -- # set +x 00:32:38.395 22:31:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.395 22:31:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:38.395 22:31:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:38.395 22:31:33 -- common/autotest_common.sh@10 -- # set +x 00:32:38.395 [2024-07-24 22:31:33.454048] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.395 22:31:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:38.395 22:31:33 -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:38.395 22:31:33 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:38.395 22:31:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:38.395 22:31:33 -- nvmf/common.sh@520 -- # config=() 00:32:38.395 22:31:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:38.395 22:31:33 -- nvmf/common.sh@520 -- # local subsystem config 00:32:38.395 22:31:33 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:38.395 22:31:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:38.395 22:31:33 -- target/dif.sh@82 -- # gen_fio_conf 00:32:38.395 22:31:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:38.395 { 00:32:38.395 "params": { 00:32:38.395 "name": "Nvme$subsystem", 00:32:38.395 "trtype": "$TEST_TRANSPORT", 00:32:38.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:38.395 "adrfam": "ipv4", 00:32:38.395 "trsvcid": "$NVMF_PORT", 00:32:38.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:38.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:38.395 "hdgst": ${hdgst:-false}, 00:32:38.395 "ddgst": ${ddgst:-false} 00:32:38.395 }, 00:32:38.395 "method": "bdev_nvme_attach_controller" 00:32:38.395 } 00:32:38.395 EOF 00:32:38.395 )") 00:32:38.395 22:31:33 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:38.395 22:31:33 -- target/dif.sh@54 -- # local file 00:32:38.395 22:31:33 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:38.395 22:31:33 -- target/dif.sh@56 -- # cat 00:32:38.395 22:31:33 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:38.395 22:31:33 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:38.395 22:31:33 -- common/autotest_common.sh@1320 -- # shift 00:32:38.395 22:31:33 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:38.395 22:31:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:38.395 22:31:33 -- nvmf/common.sh@542 -- # cat 00:32:38.395 22:31:33 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:38.395 22:31:33 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:38.395 22:31:33 -- target/dif.sh@72 -- # (( file <= files )) 00:32:38.395 22:31:33 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:38.395 22:31:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:38.395 22:31:33 -- nvmf/common.sh@544 -- # jq . 00:32:38.395 22:31:33 -- nvmf/common.sh@545 -- # IFS=, 00:32:38.395 22:31:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:38.395 "params": { 00:32:38.395 "name": "Nvme0", 00:32:38.395 "trtype": "tcp", 00:32:38.395 "traddr": "10.0.0.2", 00:32:38.395 "adrfam": "ipv4", 00:32:38.395 "trsvcid": "4420", 00:32:38.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:38.395 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:38.395 "hdgst": false, 00:32:38.395 "ddgst": false 00:32:38.395 }, 00:32:38.395 "method": "bdev_nvme_attach_controller" 00:32:38.395 }' 00:32:38.395 22:31:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:38.395 22:31:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:38.395 22:31:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:38.395 22:31:33 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:38.395 22:31:33 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:38.395 22:31:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:38.651 22:31:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:38.651 22:31:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:38.651 22:31:33 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:38.651 22:31:33 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:38.908 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:38.908 ... 00:32:38.908 fio-3.35 00:32:38.908 Starting 3 threads 00:32:38.908 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.166 [2024-07-24 22:31:34.090423] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:32:39.166 [2024-07-24 22:31:34.090470] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:44.428 00:32:44.428 filename0: (groupid=0, jobs=1): err= 0: pid=3764102: Wed Jul 24 22:31:39 2024 00:32:44.428 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(133MiB/5003msec) 00:32:44.428 slat (nsec): min=6215, max=62418, avg=11951.86, stdev=6386.67 00:32:44.428 clat (usec): min=4017, max=61783, avg=14124.00, stdev=14417.71 00:32:44.428 lat (usec): min=4031, max=61819, avg=14135.95, stdev=14418.39 00:32:44.428 clat percentiles (usec): 00:32:44.428 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6456], 20.00th=[ 7111], 00:32:44.428 | 30.00th=[ 7504], 40.00th=[ 8029], 50.00th=[ 8717], 60.00th=[ 9503], 00:32:44.429 | 70.00th=[10552], 80.00th=[13435], 90.00th=[49546], 95.00th=[54264], 00:32:44.429 | 99.00th=[58459], 99.50th=[59507], 99.90th=[61604], 99.95th=[61604], 00:32:44.429 | 99.99th=[61604] 00:32:44.429 bw ( KiB/s): min=18688, max=39936, per=33.73%, avg=27847.11, stdev=7898.52, samples=9 00:32:44.429 iops : min= 146, max= 312, avg=217.56, stdev=61.71, samples=9 00:32:44.429 lat (msec) : 10=64.56%, 20=24.13%, 50=1.79%, 100=9.52% 00:32:44.429 cpu : usr=95.88%, sys=3.32%, ctx=216, majf=0, minf=148 00:32:44.429 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.429 issued rwts: total=1061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.429 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:44.429 filename0: (groupid=0, jobs=1): err= 0: pid=3764103: Wed Jul 24 22:31:39 2024 00:32:44.429 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(134MiB/5014msec) 00:32:44.429 slat (nsec): min=6178, max=33265, avg=10261.50, stdev=5454.37 00:32:44.429 clat (usec): min=5519, max=59786, avg=13987.07, stdev=13789.51 00:32:44.429 lat (usec): min=5526, max=59793, avg=13997.33, stdev=13789.42 00:32:44.429 clat percentiles (usec): 00:32:44.429 | 1.00th=[ 6194], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 7242], 00:32:44.429 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 9110], 60.00th=[ 9896], 00:32:44.429 | 70.00th=[10683], 80.00th=[13042], 90.00th=[49021], 95.00th=[53216], 00:32:44.429 | 99.00th=[56886], 99.50th=[57934], 99.90th=[58983], 99.95th=[60031], 00:32:44.429 | 99.99th=[60031] 00:32:44.429 bw ( KiB/s): min=16929, max=33792, per=33.21%, avg=27420.90, stdev=5087.41, samples=10 00:32:44.429 iops : min= 132, max= 264, avg=214.20, stdev=39.80, samples=10 00:32:44.429 lat (msec) : 10=61.27%, 20=27.84%, 50=1.68%, 100=9.22% 00:32:44.429 cpu : usr=95.77%, sys=3.67%, ctx=6, majf=0, minf=131 00:32:44.429 IO depths : 1=5.6%, 2=94.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.429 issued rwts: total=1074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.429 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:44.429 filename0: (groupid=0, jobs=1): err= 0: pid=3764104: Wed Jul 24 22:31:39 2024 00:32:44.429 read: IOPS=220, BW=27.6MiB/s (28.9MB/s)(139MiB/5031msec) 00:32:44.429 slat (nsec): min=6113, max=31958, avg=10504.64, stdev=5690.68 00:32:44.429 clat (usec): min=5540, max=91510, avg=13580.06, stdev=13013.80 00:32:44.429 lat (usec): min=5548, max=91529, avg=13590.56, stdev=13013.92 00:32:44.429 clat 
percentiles (usec): 00:32:44.429 | 1.00th=[ 6128], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 7308], 00:32:44.429 | 30.00th=[ 7898], 40.00th=[ 8848], 50.00th=[ 9896], 60.00th=[10683], 00:32:44.429 | 70.00th=[11207], 80.00th=[11994], 90.00th=[16057], 95.00th=[52167], 00:32:44.429 | 99.00th=[53740], 99.50th=[54789], 99.90th=[56361], 99.95th=[91751], 00:32:44.429 | 99.99th=[91751] 00:32:44.429 bw ( KiB/s): min=19968, max=46848, per=34.32%, avg=28334.40, stdev=8566.03, samples=10 00:32:44.429 iops : min= 156, max= 366, avg=221.30, stdev=66.96, samples=10 00:32:44.429 lat (msec) : 10=50.36%, 20=39.73%, 50=0.45%, 100=9.46% 00:32:44.429 cpu : usr=95.81%, sys=3.64%, ctx=9, majf=0, minf=129 00:32:44.429 IO depths : 1=3.2%, 2=96.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.429 issued rwts: total=1110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.429 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:44.429 00:32:44.429 Run status group 0 (all jobs): 00:32:44.429 READ: bw=80.6MiB/s (84.5MB/s), 26.5MiB/s-27.6MiB/s (27.8MB/s-28.9MB/s), io=406MiB (425MB), run=5003-5031msec 00:32:44.429 22:31:39 -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:44.429 22:31:39 -- target/dif.sh@43 -- # local sub 00:32:44.429 22:31:39 -- target/dif.sh@45 -- # for sub in "$@" 00:32:44.429 22:31:39 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:44.429 22:31:39 -- target/dif.sh@36 -- # local sub_id=0 00:32:44.429 22:31:39 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.429 22:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.429 22:31:39 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.429 22:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.429 22:31:39 -- target/dif.sh@109 -- # NULL_DIF=2 00:32:44.429 22:31:39 -- target/dif.sh@109 -- # bs=4k 00:32:44.429 22:31:39 -- target/dif.sh@109 -- # numjobs=8 00:32:44.429 22:31:39 -- target/dif.sh@109 -- # iodepth=16 00:32:44.429 22:31:39 -- target/dif.sh@109 -- # runtime= 00:32:44.429 22:31:39 -- target/dif.sh@109 -- # files=2 00:32:44.429 22:31:39 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:44.429 22:31:39 -- target/dif.sh@28 -- # local sub 00:32:44.429 22:31:39 -- target/dif.sh@30 -- # for sub in "$@" 00:32:44.429 22:31:39 -- target/dif.sh@31 -- # create_subsystem 0 00:32:44.429 22:31:39 -- target/dif.sh@18 -- # local sub_id=0 00:32:44.429 22:31:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.429 bdev_null0 00:32:44.429 22:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.429 22:31:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.429 22:31:39 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.429 22:31:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.429 22:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.429 22:31:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.429 [2024-07-24 22:31:39.447265] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:44.429 22:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.429 22:31:39 -- target/dif.sh@30 -- # for sub in "$@" 00:32:44.429 22:31:39 -- target/dif.sh@31 -- # create_subsystem 1 00:32:44.429 22:31:39 -- target/dif.sh@18 -- # local sub_id=1 00:32:44.429 22:31:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.429 bdev_null1 00:32:44.429 22:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.429 22:31:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.429 22:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.429 22:31:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.429 22:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.429 22:31:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.429 22:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.429 22:31:39 -- target/dif.sh@30 -- # for sub in "$@" 00:32:44.429 22:31:39 -- target/dif.sh@31 -- # create_subsystem 2 00:32:44.429 22:31:39 -- target/dif.sh@18 -- # local sub_id=2 00:32:44.429 22:31:39 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.429 bdev_null2 00:32:44.429 22:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.429 22:31:39 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.429 22:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.429 22:31:39 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.429 22:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.429 22:31:39 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:44.429 22:31:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:44.429 22:31:39 -- common/autotest_common.sh@10 -- # set +x 00:32:44.430 22:31:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:44.430 22:31:39 -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:44.430 22:31:39 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:44.430 22:31:39 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:44.430 22:31:39 -- nvmf/common.sh@520 -- # config=() 00:32:44.430 22:31:39 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:44.430 22:31:39 -- nvmf/common.sh@520 -- # local subsystem config 00:32:44.430 22:31:39 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:44.430 22:31:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:44.430 22:31:39 -- target/dif.sh@82 -- # gen_fio_conf 00:32:44.430 22:31:39 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:44.430 22:31:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:44.430 { 00:32:44.430 "params": { 00:32:44.430 "name": "Nvme$subsystem", 00:32:44.430 "trtype": "$TEST_TRANSPORT", 00:32:44.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:44.430 "adrfam": "ipv4", 00:32:44.430 "trsvcid": "$NVMF_PORT", 00:32:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:44.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:44.430 "hdgst": ${hdgst:-false}, 00:32:44.430 "ddgst": ${ddgst:-false} 00:32:44.430 }, 00:32:44.430 "method": "bdev_nvme_attach_controller" 00:32:44.430 } 00:32:44.430 EOF 00:32:44.430 )") 00:32:44.430 22:31:39 -- target/dif.sh@54 -- # local file 00:32:44.430 22:31:39 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:44.430 22:31:39 -- target/dif.sh@56 -- # cat 00:32:44.430 22:31:39 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:44.430 22:31:39 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:44.430 22:31:39 -- common/autotest_common.sh@1320 -- # shift 00:32:44.430 22:31:39 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:44.430 22:31:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:44.430 22:31:39 -- nvmf/common.sh@542 -- # cat 00:32:44.430 22:31:39 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:44.430 22:31:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:44.430 22:31:39 -- target/dif.sh@72 -- # (( file <= files )) 00:32:44.430 22:31:39 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:44.430 22:31:39 -- target/dif.sh@73 -- # cat 00:32:44.430 22:31:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:44.430 22:31:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:44.430 22:31:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:44.430 { 00:32:44.430 "params": { 00:32:44.430 "name": "Nvme$subsystem", 00:32:44.430 "trtype": "$TEST_TRANSPORT", 00:32:44.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:44.430 "adrfam": "ipv4", 
00:32:44.430 "trsvcid": "$NVMF_PORT", 00:32:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:44.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:44.430 "hdgst": ${hdgst:-false}, 00:32:44.430 "ddgst": ${ddgst:-false} 00:32:44.430 }, 00:32:44.430 "method": "bdev_nvme_attach_controller" 00:32:44.430 } 00:32:44.430 EOF 00:32:44.430 )") 00:32:44.430 22:31:39 -- target/dif.sh@72 -- # (( file++ )) 00:32:44.430 22:31:39 -- target/dif.sh@72 -- # (( file <= files )) 00:32:44.430 22:31:39 -- nvmf/common.sh@542 -- # cat 00:32:44.430 22:31:39 -- target/dif.sh@73 -- # cat 00:32:44.430 22:31:39 -- target/dif.sh@72 -- # (( file++ )) 00:32:44.430 22:31:39 -- target/dif.sh@72 -- # (( file <= files )) 00:32:44.430 22:31:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:44.430 22:31:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:44.430 { 00:32:44.430 "params": { 00:32:44.430 "name": "Nvme$subsystem", 00:32:44.430 "trtype": "$TEST_TRANSPORT", 00:32:44.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:44.430 "adrfam": "ipv4", 00:32:44.430 "trsvcid": "$NVMF_PORT", 00:32:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:44.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:44.430 "hdgst": ${hdgst:-false}, 00:32:44.430 "ddgst": ${ddgst:-false} 00:32:44.430 }, 00:32:44.430 "method": "bdev_nvme_attach_controller" 00:32:44.430 } 00:32:44.430 EOF 00:32:44.430 )") 00:32:44.430 22:31:39 -- nvmf/common.sh@542 -- # cat 00:32:44.430 22:31:39 -- nvmf/common.sh@544 -- # jq . 00:32:44.430 22:31:39 -- nvmf/common.sh@545 -- # IFS=, 00:32:44.430 22:31:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:44.430 "params": { 00:32:44.430 "name": "Nvme0", 00:32:44.430 "trtype": "tcp", 00:32:44.430 "traddr": "10.0.0.2", 00:32:44.430 "adrfam": "ipv4", 00:32:44.430 "trsvcid": "4420", 00:32:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:44.430 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:44.430 "hdgst": false, 00:32:44.430 "ddgst": false 00:32:44.430 }, 00:32:44.430 "method": "bdev_nvme_attach_controller" 00:32:44.430 },{ 00:32:44.430 "params": { 00:32:44.430 "name": "Nvme1", 00:32:44.430 "trtype": "tcp", 00:32:44.430 "traddr": "10.0.0.2", 00:32:44.430 "adrfam": "ipv4", 00:32:44.430 "trsvcid": "4420", 00:32:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:44.430 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:44.430 "hdgst": false, 00:32:44.430 "ddgst": false 00:32:44.430 }, 00:32:44.430 "method": "bdev_nvme_attach_controller" 00:32:44.430 },{ 00:32:44.430 "params": { 00:32:44.430 "name": "Nvme2", 00:32:44.430 "trtype": "tcp", 00:32:44.430 "traddr": "10.0.0.2", 00:32:44.430 "adrfam": "ipv4", 00:32:44.430 "trsvcid": "4420", 00:32:44.430 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:44.430 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:44.430 "hdgst": false, 00:32:44.430 "ddgst": false 00:32:44.430 }, 00:32:44.430 "method": "bdev_nvme_attach_controller" 00:32:44.430 }' 00:32:44.430 22:31:39 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:44.430 22:31:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:44.430 22:31:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:44.430 22:31:39 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:44.430 22:31:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:44.430 22:31:39 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:44.704 22:31:39 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:32:44.704 22:31:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:44.704 22:31:39 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:44.704 22:31:39 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:44.961 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:44.961 ... 00:32:44.961 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:44.961 ... 00:32:44.961 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:44.961 ... 00:32:44.961 fio-3.35 00:32:44.961 Starting 24 threads 00:32:44.961 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.526 [2024-07-24 22:31:40.502206] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:32:45.526 [2024-07-24 22:31:40.502252] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:32:57.714 00:32:57.714 filename0: (groupid=0, jobs=1): err= 0: pid=3765169: Wed Jul 24 22:31:50 2024 00:32:57.715 read: IOPS=617, BW=2469KiB/s (2528kB/s)(24.2MiB/10020msec) 00:32:57.715 slat (nsec): min=4068, max=63091, avg=12518.13, stdev=5731.33 00:32:57.715 clat (usec): min=3810, max=47567, avg=25843.18, stdev=5247.21 00:32:57.715 lat (usec): min=3817, max=47576, avg=25855.70, stdev=5247.94 00:32:57.715 clat percentiles (usec): 00:32:57.715 | 1.00th=[13435], 5.00th=[18482], 10.00th=[21890], 20.00th=[22676], 00:32:57.715 | 30.00th=[23200], 40.00th=[23725], 50.00th=[24249], 60.00th=[24773], 00:32:57.715 | 70.00th=[28967], 80.00th=[30802], 90.00th=[32375], 95.00th=[33817], 00:32:57.715 | 99.00th=[40109], 99.50th=[42730], 99.90th=[46400], 99.95th=[47449], 00:32:57.715 | 99.99th=[47449] 00:32:57.715 bw ( KiB/s): min= 2224, max= 2728, per=4.29%, avg=2467.60, stdev=122.97, samples=20 00:32:57.715 iops : min= 556, max= 682, avg=616.90, stdev=30.74, samples=20 00:32:57.715 lat (msec) : 4=0.13%, 10=0.65%, 20=4.66%, 50=94.57% 00:32:57.715 cpu : usr=98.66%, sys=0.92%, ctx=18, majf=0, minf=61 00:32:57.715 IO depths : 1=0.3%, 2=0.7%, 4=7.2%, 8=78.3%, 16=13.5%, 32=0.0%, >=64=0.0% 00:32:57.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 issued rwts: total=6185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.715 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.715 filename0: (groupid=0, jobs=1): err= 0: pid=3765170: Wed Jul 24 22:31:50 2024 00:32:57.715 read: IOPS=583, BW=2333KiB/s (2388kB/s)(22.8MiB/10027msec) 00:32:57.715 slat (nsec): min=6840, max=73045, avg=15270.39, stdev=7766.84 00:32:57.715 clat (usec): min=4165, max=49238, avg=27325.43, stdev=5821.19 00:32:57.715 lat (usec): min=4176, max=49252, avg=27340.70, stdev=5821.81 00:32:57.715 clat percentiles (usec): 00:32:57.715 | 1.00th=[13566], 5.00th=[20317], 10.00th=[22152], 20.00th=[23200], 00:32:57.715 | 30.00th=[23725], 40.00th=[24511], 50.00th=[26084], 60.00th=[28967], 00:32:57.715 | 70.00th=[30278], 80.00th=[31589], 90.00th=[33817], 95.00th=[36439], 00:32:57.715 | 99.00th=[45876], 99.50th=[46924], 99.90th=[47973], 99.95th=[49021], 00:32:57.715 | 99.99th=[49021] 00:32:57.715 bw ( KiB/s): min= 2168, max= 2520, per=4.06%, 
avg=2332.40, stdev=112.19, samples=20 00:32:57.715 iops : min= 542, max= 630, avg=583.10, stdev=28.05, samples=20 00:32:57.715 lat (msec) : 10=0.82%, 20=3.98%, 50=95.19% 00:32:57.715 cpu : usr=98.91%, sys=0.70%, ctx=17, majf=0, minf=43 00:32:57.715 IO depths : 1=0.1%, 2=0.7%, 4=7.4%, 8=77.6%, 16=14.2%, 32=0.0%, >=64=0.0% 00:32:57.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 complete : 0=0.0%, 4=90.5%, 8=5.4%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 issued rwts: total=5847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.715 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.715 filename0: (groupid=0, jobs=1): err= 0: pid=3765171: Wed Jul 24 22:31:50 2024 00:32:57.715 read: IOPS=557, BW=2231KiB/s (2285kB/s)(21.8MiB/10003msec) 00:32:57.715 slat (usec): min=6, max=129, avg=50.79, stdev=28.01 00:32:57.715 clat (usec): min=5283, max=49780, avg=28448.17, stdev=5329.08 00:32:57.715 lat (usec): min=5290, max=49850, avg=28498.96, stdev=5331.39 00:32:57.715 clat percentiles (usec): 00:32:57.715 | 1.00th=[16057], 5.00th=[20055], 10.00th=[22152], 20.00th=[23725], 00:32:57.715 | 30.00th=[25560], 40.00th=[27919], 50.00th=[29230], 60.00th=[30016], 00:32:57.715 | 70.00th=[31065], 80.00th=[32113], 90.00th=[33817], 95.00th=[35914], 00:32:57.715 | 99.00th=[44827], 99.50th=[46400], 99.90th=[49021], 99.95th=[49021], 00:32:57.715 | 99.99th=[49546] 00:32:57.715 bw ( KiB/s): min= 2016, max= 2480, per=3.85%, avg=2212.21, stdev=137.10, samples=19 00:32:57.715 iops : min= 504, max= 620, avg=553.05, stdev=34.28, samples=19 00:32:57.715 lat (msec) : 10=0.52%, 20=4.01%, 50=95.47% 00:32:57.715 cpu : usr=98.64%, sys=0.96%, ctx=13, majf=0, minf=64 00:32:57.715 IO depths : 1=0.1%, 2=0.3%, 4=5.7%, 8=78.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:32:57.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 complete : 0=0.0%, 4=89.8%, 8=6.8%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 issued rwts: total=5580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.715 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.715 filename0: (groupid=0, jobs=1): err= 0: pid=3765172: Wed Jul 24 22:31:50 2024 00:32:57.715 read: IOPS=595, BW=2381KiB/s (2438kB/s)(23.3MiB/10021msec) 00:32:57.715 slat (nsec): min=6832, max=72689, avg=15058.98, stdev=7058.82 00:32:57.715 clat (usec): min=10774, max=65743, avg=26785.98, stdev=5228.64 00:32:57.715 lat (usec): min=10783, max=65765, avg=26801.04, stdev=5228.39 00:32:57.715 clat percentiles (usec): 00:32:57.715 | 1.00th=[14484], 5.00th=[21103], 10.00th=[22414], 20.00th=[23200], 00:32:57.715 | 30.00th=[23725], 40.00th=[24249], 50.00th=[24773], 60.00th=[27919], 00:32:57.715 | 70.00th=[29754], 80.00th=[31327], 90.00th=[32900], 95.00th=[34341], 00:32:57.715 | 99.00th=[44827], 99.50th=[47973], 99.90th=[52691], 99.95th=[52691], 00:32:57.715 | 99.99th=[65799] 00:32:57.715 bw ( KiB/s): min= 2048, max= 2560, per=4.14%, avg=2379.20, stdev=142.17, samples=20 00:32:57.715 iops : min= 512, max= 640, avg=594.80, stdev=35.54, samples=20 00:32:57.715 lat (msec) : 20=4.21%, 50=95.52%, 100=0.27% 00:32:57.715 cpu : usr=98.64%, sys=0.97%, ctx=18, majf=0, minf=74 00:32:57.715 IO depths : 1=0.2%, 2=0.5%, 4=7.9%, 8=78.3%, 16=13.1%, 32=0.0%, >=64=0.0% 00:32:57.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 issued rwts: total=5964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.715 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:32:57.715 filename0: (groupid=0, jobs=1): err= 0: pid=3765173: Wed Jul 24 22:31:50 2024 00:32:57.715 read: IOPS=632, BW=2531KiB/s (2592kB/s)(24.7MiB/10007msec) 00:32:57.715 slat (nsec): min=6178, max=72231, avg=15293.34, stdev=7633.22 00:32:57.715 clat (usec): min=6837, max=51561, avg=25200.42, stdev=4503.74 00:32:57.715 lat (usec): min=6854, max=51577, avg=25215.72, stdev=4503.92 00:32:57.715 clat percentiles (usec): 00:32:57.715 | 1.00th=[13698], 5.00th=[20055], 10.00th=[21890], 20.00th=[22676], 00:32:57.715 | 30.00th=[23200], 40.00th=[23725], 50.00th=[23987], 60.00th=[24511], 00:32:57.715 | 70.00th=[25297], 80.00th=[29230], 90.00th=[31327], 95.00th=[32900], 00:32:57.715 | 99.00th=[37487], 99.50th=[40633], 99.90th=[47449], 99.95th=[51643], 00:32:57.715 | 99.99th=[51643] 00:32:57.715 bw ( KiB/s): min= 2360, max= 2640, per=4.39%, avg=2524.63, stdev=84.21, samples=19 00:32:57.715 iops : min= 590, max= 660, avg=631.16, stdev=21.05, samples=19 00:32:57.715 lat (msec) : 10=0.47%, 20=4.47%, 50=94.98%, 100=0.08% 00:32:57.715 cpu : usr=98.72%, sys=0.89%, ctx=18, majf=0, minf=46 00:32:57.715 IO depths : 1=0.1%, 2=0.4%, 4=7.1%, 8=79.6%, 16=12.9%, 32=0.0%, >=64=0.0% 00:32:57.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 complete : 0=0.0%, 4=89.5%, 8=5.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 issued rwts: total=6333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.715 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.715 filename0: (groupid=0, jobs=1): err= 0: pid=3765174: Wed Jul 24 22:31:50 2024 00:32:57.715 read: IOPS=585, BW=2340KiB/s (2397kB/s)(22.9MiB/10017msec) 00:32:57.715 slat (usec): min=6, max=283, avg=15.71, stdev= 8.17 00:32:57.715 clat (usec): min=12563, max=49343, avg=27258.57, stdev=5325.27 00:32:57.715 lat (usec): min=12576, max=49374, avg=27274.28, stdev=5325.65 00:32:57.715 clat percentiles (usec): 00:32:57.715 | 1.00th=[15533], 5.00th=[21103], 10.00th=[22414], 20.00th=[23200], 00:32:57.715 | 30.00th=[23725], 40.00th=[24249], 50.00th=[25297], 60.00th=[28967], 00:32:57.715 | 70.00th=[30540], 80.00th=[31589], 90.00th=[33424], 95.00th=[35914], 00:32:57.715 | 99.00th=[44827], 99.50th=[46400], 99.90th=[47973], 99.95th=[49021], 00:32:57.715 | 99.99th=[49546] 00:32:57.715 bw ( KiB/s): min= 2184, max= 2616, per=4.07%, avg=2338.00, stdev=108.14, samples=20 00:32:57.715 iops : min= 546, max= 654, avg=584.50, stdev=27.04, samples=20 00:32:57.715 lat (msec) : 20=3.40%, 50=96.60% 00:32:57.715 cpu : usr=96.75%, sys=1.53%, ctx=46, majf=0, minf=70 00:32:57.715 IO depths : 1=0.2%, 2=0.5%, 4=8.0%, 8=77.8%, 16=13.5%, 32=0.0%, >=64=0.0% 00:32:57.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 complete : 0=0.0%, 4=90.2%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 issued rwts: total=5861,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.715 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.715 filename0: (groupid=0, jobs=1): err= 0: pid=3765175: Wed Jul 24 22:31:50 2024 00:32:57.715 read: IOPS=595, BW=2382KiB/s (2439kB/s)(23.3MiB/10010msec) 00:32:57.715 slat (nsec): min=6801, max=72116, avg=14847.86, stdev=7673.55 00:32:57.715 clat (usec): min=9676, max=63381, avg=26782.82, stdev=5421.04 00:32:57.715 lat (usec): min=9683, max=63401, avg=26797.67, stdev=5420.85 00:32:57.715 clat percentiles (usec): 00:32:57.715 | 1.00th=[14222], 5.00th=[20055], 10.00th=[22152], 20.00th=[23200], 00:32:57.715 | 30.00th=[23725], 
40.00th=[23987], 50.00th=[24773], 60.00th=[27919], 00:32:57.715 | 70.00th=[30016], 80.00th=[31327], 90.00th=[32900], 95.00th=[34866], 00:32:57.715 | 99.00th=[44827], 99.50th=[46400], 99.90th=[58983], 99.95th=[63177], 00:32:57.715 | 99.99th=[63177] 00:32:57.715 bw ( KiB/s): min= 2176, max= 2536, per=4.14%, avg=2381.89, stdev=98.88, samples=19 00:32:57.715 iops : min= 544, max= 634, avg=595.47, stdev=24.72, samples=19 00:32:57.715 lat (msec) : 10=0.10%, 20=4.66%, 50=94.97%, 100=0.27% 00:32:57.715 cpu : usr=98.90%, sys=0.71%, ctx=14, majf=0, minf=63 00:32:57.715 IO depths : 1=0.2%, 2=0.7%, 4=7.2%, 8=78.6%, 16=13.2%, 32=0.0%, >=64=0.0% 00:32:57.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 complete : 0=0.0%, 4=89.9%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.715 issued rwts: total=5961,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.715 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.715 filename0: (groupid=0, jobs=1): err= 0: pid=3765176: Wed Jul 24 22:31:50 2024 00:32:57.715 read: IOPS=588, BW=2355KiB/s (2412kB/s)(23.0MiB/10016msec) 00:32:57.715 slat (nsec): min=6751, max=74183, avg=14226.86, stdev=7552.24 00:32:57.715 clat (usec): min=11764, max=52314, avg=27081.47, stdev=5229.67 00:32:57.715 lat (usec): min=11787, max=52331, avg=27095.70, stdev=5229.46 00:32:57.715 clat percentiles (usec): 00:32:57.715 | 1.00th=[15139], 5.00th=[19268], 10.00th=[22152], 20.00th=[22938], 00:32:57.715 | 30.00th=[23725], 40.00th=[24511], 50.00th=[25822], 60.00th=[28705], 00:32:57.715 | 70.00th=[30278], 80.00th=[31327], 90.00th=[32900], 95.00th=[34866], 00:32:57.715 | 99.00th=[43779], 99.50th=[45876], 99.90th=[48497], 99.95th=[48497], 00:32:57.716 | 99.99th=[52167] 00:32:57.716 bw ( KiB/s): min= 2000, max= 2688, per=4.09%, avg=2352.80, stdev=156.70, samples=20 00:32:57.716 iops : min= 500, max= 672, avg=588.20, stdev=39.18, samples=20 00:32:57.716 lat (msec) : 20=5.56%, 50=94.42%, 100=0.02% 00:32:57.716 cpu : usr=98.61%, sys=0.98%, ctx=14, majf=0, minf=75 00:32:57.716 IO depths : 1=0.1%, 2=0.6%, 4=7.3%, 8=77.9%, 16=14.1%, 32=0.0%, >=64=0.0% 00:32:57.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 complete : 0=0.0%, 4=90.3%, 8=5.5%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 issued rwts: total=5898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.716 filename1: (groupid=0, jobs=1): err= 0: pid=3765177: Wed Jul 24 22:31:50 2024 00:32:57.716 read: IOPS=605, BW=2420KiB/s (2479kB/s)(23.7MiB/10008msec) 00:32:57.716 slat (nsec): min=6047, max=66275, avg=12278.33, stdev=7217.23 00:32:57.716 clat (usec): min=8378, max=78284, avg=26376.77, stdev=6708.52 00:32:57.716 lat (usec): min=8399, max=78300, avg=26389.05, stdev=6709.67 00:32:57.716 clat percentiles (usec): 00:32:57.716 | 1.00th=[14091], 5.00th=[15533], 10.00th=[16712], 20.00th=[22152], 00:32:57.716 | 30.00th=[23200], 40.00th=[23987], 50.00th=[25560], 60.00th=[28705], 00:32:57.716 | 70.00th=[30016], 80.00th=[31327], 90.00th=[33424], 95.00th=[36439], 00:32:57.716 | 99.00th=[44827], 99.50th=[46924], 99.90th=[64226], 99.95th=[64226], 00:32:57.716 | 99.99th=[78119] 00:32:57.716 bw ( KiB/s): min= 1944, max= 3752, per=4.22%, avg=2427.37, stdev=397.59, samples=19 00:32:57.716 iops : min= 486, max= 938, avg=606.84, stdev=99.40, samples=19 00:32:57.716 lat (msec) : 10=0.20%, 20=16.02%, 50=83.45%, 100=0.33% 00:32:57.716 cpu : usr=98.73%, sys=0.89%, ctx=14, majf=0, minf=77 00:32:57.716 IO 
depths : 1=0.1%, 2=0.3%, 4=5.5%, 8=79.0%, 16=15.1%, 32=0.0%, >=64=0.0% 00:32:57.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 complete : 0=0.0%, 4=90.0%, 8=7.3%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 issued rwts: total=6056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.716 filename1: (groupid=0, jobs=1): err= 0: pid=3765178: Wed Jul 24 22:31:50 2024 00:32:57.716 read: IOPS=557, BW=2229KiB/s (2282kB/s)(21.8MiB/10006msec) 00:32:57.716 slat (nsec): min=6060, max=67538, avg=14188.07, stdev=7052.64 00:32:57.716 clat (usec): min=6286, max=68532, avg=28631.29, stdev=4819.75 00:32:57.716 lat (usec): min=6302, max=68549, avg=28645.48, stdev=4818.89 00:32:57.716 clat percentiles (usec): 00:32:57.716 | 1.00th=[15139], 5.00th=[22152], 10.00th=[23200], 20.00th=[23987], 00:32:57.716 | 30.00th=[25297], 40.00th=[28705], 50.00th=[29754], 60.00th=[30540], 00:32:57.716 | 70.00th=[31327], 80.00th=[32113], 90.00th=[32900], 95.00th=[33817], 00:32:57.716 | 99.00th=[38011], 99.50th=[40633], 99.90th=[61080], 99.95th=[68682], 00:32:57.716 | 99.99th=[68682] 00:32:57.716 bw ( KiB/s): min= 2016, max= 2560, per=3.84%, avg=2204.84, stdev=195.55, samples=19 00:32:57.716 iops : min= 504, max= 640, avg=551.21, stdev=48.89, samples=19 00:32:57.716 lat (msec) : 10=0.36%, 20=2.37%, 50=96.99%, 100=0.29% 00:32:57.716 cpu : usr=98.59%, sys=1.02%, ctx=13, majf=0, minf=53 00:32:57.716 IO depths : 1=0.1%, 2=0.1%, 4=13.1%, 8=73.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:32:57.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 complete : 0=0.0%, 4=92.2%, 8=2.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 issued rwts: total=5575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.716 filename1: (groupid=0, jobs=1): err= 0: pid=3765179: Wed Jul 24 22:31:50 2024 00:32:57.716 read: IOPS=575, BW=2303KiB/s (2358kB/s)(22.5MiB/10022msec) 00:32:57.716 slat (nsec): min=6772, max=73055, avg=14456.49, stdev=7493.27 00:32:57.716 clat (usec): min=11108, max=66358, avg=27711.34, stdev=5540.54 00:32:57.716 lat (usec): min=11120, max=66372, avg=27725.80, stdev=5540.33 00:32:57.716 clat percentiles (usec): 00:32:57.716 | 1.00th=[16450], 5.00th=[21365], 10.00th=[22414], 20.00th=[23200], 00:32:57.716 | 30.00th=[23987], 40.00th=[24511], 50.00th=[27132], 60.00th=[29230], 00:32:57.716 | 70.00th=[30540], 80.00th=[31589], 90.00th=[33424], 95.00th=[36963], 00:32:57.716 | 99.00th=[45876], 99.50th=[47973], 99.90th=[52691], 99.95th=[66323], 00:32:57.716 | 99.99th=[66323] 00:32:57.716 bw ( KiB/s): min= 2016, max= 2496, per=4.00%, avg=2301.20, stdev=107.56, samples=20 00:32:57.716 iops : min= 504, max= 624, avg=575.30, stdev=26.89, samples=20 00:32:57.716 lat (msec) : 20=3.29%, 50=96.33%, 100=0.38% 00:32:57.716 cpu : usr=98.82%, sys=0.79%, ctx=13, majf=0, minf=51 00:32:57.716 IO depths : 1=0.1%, 2=0.6%, 4=6.5%, 8=78.4%, 16=14.4%, 32=0.0%, >=64=0.0% 00:32:57.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 complete : 0=0.0%, 4=90.1%, 8=5.9%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 issued rwts: total=5769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.716 filename1: (groupid=0, jobs=1): err= 0: pid=3765180: Wed Jul 24 22:31:50 2024 00:32:57.716 read: IOPS=585, BW=2342KiB/s (2398kB/s)(22.9MiB/10008msec) 00:32:57.716 slat 
(nsec): min=6826, max=73659, avg=14296.59, stdev=8003.98 00:32:57.716 clat (usec): min=10190, max=61192, avg=27250.28, stdev=5392.56 00:32:57.716 lat (usec): min=10198, max=61212, avg=27264.58, stdev=5391.96 00:32:57.716 clat percentiles (usec): 00:32:57.716 | 1.00th=[15533], 5.00th=[21103], 10.00th=[22152], 20.00th=[23200], 00:32:57.716 | 30.00th=[23725], 40.00th=[24249], 50.00th=[25560], 60.00th=[28967], 00:32:57.716 | 70.00th=[30540], 80.00th=[31589], 90.00th=[33162], 95.00th=[34866], 00:32:57.716 | 99.00th=[45351], 99.50th=[47449], 99.90th=[57410], 99.95th=[61080], 00:32:57.716 | 99.99th=[61080] 00:32:57.716 bw ( KiB/s): min= 2176, max= 2512, per=4.06%, avg=2334.32, stdev=100.30, samples=19 00:32:57.716 iops : min= 544, max= 628, avg=583.58, stdev=25.07, samples=19 00:32:57.716 lat (msec) : 20=4.06%, 50=95.67%, 100=0.27% 00:32:57.716 cpu : usr=98.81%, sys=0.81%, ctx=13, majf=0, minf=55 00:32:57.716 IO depths : 1=0.1%, 2=0.2%, 4=7.0%, 8=78.6%, 16=14.1%, 32=0.0%, >=64=0.0% 00:32:57.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 complete : 0=0.0%, 4=90.3%, 8=5.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 issued rwts: total=5860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.716 filename1: (groupid=0, jobs=1): err= 0: pid=3765181: Wed Jul 24 22:31:50 2024 00:32:57.716 read: IOPS=626, BW=2508KiB/s (2568kB/s)(24.5MiB/10017msec) 00:32:57.716 slat (nsec): min=6444, max=73135, avg=14273.92, stdev=6968.71 00:32:57.716 clat (usec): min=11477, max=52499, avg=25435.29, stdev=4774.03 00:32:57.716 lat (usec): min=11494, max=52549, avg=25449.56, stdev=4775.02 00:32:57.716 clat percentiles (usec): 00:32:57.716 | 1.00th=[13829], 5.00th=[18744], 10.00th=[21627], 20.00th=[22676], 00:32:57.716 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23987], 60.00th=[24511], 00:32:57.716 | 70.00th=[26346], 80.00th=[30278], 90.00th=[32113], 95.00th=[33162], 00:32:57.716 | 99.00th=[39584], 99.50th=[41157], 99.90th=[46924], 99.95th=[46924], 00:32:57.716 | 99.99th=[52691] 00:32:57.716 bw ( KiB/s): min= 2304, max= 2760, per=4.36%, avg=2505.60, stdev=115.37, samples=20 00:32:57.716 iops : min= 576, max= 690, avg=626.40, stdev=28.84, samples=20 00:32:57.716 lat (msec) : 20=5.83%, 50=94.12%, 100=0.05% 00:32:57.716 cpu : usr=98.14%, sys=1.13%, ctx=24, majf=0, minf=56 00:32:57.716 IO depths : 1=0.8%, 2=1.7%, 4=8.8%, 8=75.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:32:57.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 issued rwts: total=6280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.716 filename1: (groupid=0, jobs=1): err= 0: pid=3765182: Wed Jul 24 22:31:50 2024 00:32:57.716 read: IOPS=596, BW=2386KiB/s (2443kB/s)(23.3MiB/10005msec) 00:32:57.716 slat (nsec): min=6341, max=71271, avg=15643.09, stdev=8065.19 00:32:57.716 clat (usec): min=7218, max=68716, avg=26739.86, stdev=5024.77 00:32:57.716 lat (usec): min=7233, max=68732, avg=26755.50, stdev=5023.88 00:32:57.716 clat percentiles (usec): 00:32:57.716 | 1.00th=[14615], 5.00th=[21103], 10.00th=[22414], 20.00th=[23200], 00:32:57.716 | 30.00th=[23725], 40.00th=[24249], 50.00th=[24773], 60.00th=[28443], 00:32:57.716 | 70.00th=[30016], 80.00th=[31065], 90.00th=[32375], 95.00th=[33817], 00:32:57.716 | 99.00th=[39060], 99.50th=[42206], 99.90th=[61604], 99.95th=[68682], 
00:32:57.716 | 99.99th=[68682] 00:32:57.716 bw ( KiB/s): min= 2048, max= 2616, per=4.14%, avg=2376.00, stdev=202.37, samples=19 00:32:57.716 iops : min= 512, max= 654, avg=594.00, stdev=50.59, samples=19 00:32:57.716 lat (msec) : 10=0.17%, 20=3.67%, 50=95.89%, 100=0.27% 00:32:57.716 cpu : usr=98.44%, sys=1.15%, ctx=17, majf=0, minf=50 00:32:57.716 IO depths : 1=0.1%, 2=0.5%, 4=9.6%, 8=76.6%, 16=13.3%, 32=0.0%, >=64=0.0% 00:32:57.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 complete : 0=0.0%, 4=90.9%, 8=4.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 issued rwts: total=5967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.716 filename1: (groupid=0, jobs=1): err= 0: pid=3765183: Wed Jul 24 22:31:50 2024 00:32:57.716 read: IOPS=590, BW=2363KiB/s (2420kB/s)(23.1MiB/10021msec) 00:32:57.716 slat (nsec): min=6829, max=74686, avg=15208.17, stdev=7270.47 00:32:57.716 clat (usec): min=11967, max=47973, avg=26994.04, stdev=5246.38 00:32:57.716 lat (usec): min=11984, max=48002, avg=27009.25, stdev=5246.40 00:32:57.716 clat percentiles (usec): 00:32:57.716 | 1.00th=[15270], 5.00th=[21103], 10.00th=[22152], 20.00th=[23200], 00:32:57.716 | 30.00th=[23725], 40.00th=[24249], 50.00th=[25035], 60.00th=[28443], 00:32:57.716 | 70.00th=[30016], 80.00th=[31327], 90.00th=[32900], 95.00th=[34866], 00:32:57.716 | 99.00th=[45876], 99.50th=[46924], 99.90th=[47449], 99.95th=[47973], 00:32:57.716 | 99.99th=[47973] 00:32:57.716 bw ( KiB/s): min= 2192, max= 2504, per=4.11%, avg=2361.60, stdev=89.26, samples=20 00:32:57.716 iops : min= 548, max= 626, avg=590.40, stdev=22.31, samples=20 00:32:57.716 lat (msec) : 20=3.92%, 50=96.08% 00:32:57.716 cpu : usr=98.77%, sys=0.82%, ctx=15, majf=0, minf=59 00:32:57.716 IO depths : 1=0.2%, 2=0.6%, 4=7.8%, 8=78.2%, 16=13.1%, 32=0.0%, >=64=0.0% 00:32:57.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.716 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.716 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.717 filename1: (groupid=0, jobs=1): err= 0: pid=3765184: Wed Jul 24 22:31:50 2024 00:32:57.717 read: IOPS=626, BW=2507KiB/s (2567kB/s)(24.5MiB/10017msec) 00:32:57.717 slat (nsec): min=6801, max=74884, avg=14983.71, stdev=7310.72 00:32:57.717 clat (usec): min=8387, max=48026, avg=25442.69, stdev=4548.31 00:32:57.717 lat (usec): min=8399, max=48036, avg=25457.68, stdev=4548.85 00:32:57.717 clat percentiles (usec): 00:32:57.717 | 1.00th=[14746], 5.00th=[20055], 10.00th=[21890], 20.00th=[22676], 00:32:57.717 | 30.00th=[23200], 40.00th=[23725], 50.00th=[23987], 60.00th=[24511], 00:32:57.717 | 70.00th=[25560], 80.00th=[30016], 90.00th=[31589], 95.00th=[33424], 00:32:57.717 | 99.00th=[39060], 99.50th=[40109], 99.90th=[46924], 99.95th=[46924], 00:32:57.717 | 99.99th=[47973] 00:32:57.717 bw ( KiB/s): min= 2304, max= 2640, per=4.36%, avg=2504.80, stdev=89.38, samples=20 00:32:57.717 iops : min= 576, max= 660, avg=626.20, stdev=22.35, samples=20 00:32:57.717 lat (msec) : 10=0.02%, 20=4.76%, 50=95.22% 00:32:57.717 cpu : usr=98.75%, sys=0.85%, ctx=14, majf=0, minf=90 00:32:57.717 IO depths : 1=0.5%, 2=1.0%, 4=7.7%, 8=77.8%, 16=13.0%, 32=0.0%, >=64=0.0% 00:32:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.717 complete : 0=0.0%, 4=89.9%, 8=5.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:32:57.717 issued rwts: total=6278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.717 filename2: (groupid=0, jobs=1): err= 0: pid=3765185: Wed Jul 24 22:31:50 2024 00:32:57.717 read: IOPS=601, BW=2407KiB/s (2465kB/s)(23.6MiB/10021msec) 00:32:57.717 slat (nsec): min=6749, max=73236, avg=13779.17, stdev=6698.02 00:32:57.717 clat (usec): min=11940, max=49276, avg=26496.51, stdev=5134.55 00:32:57.717 lat (usec): min=11952, max=49287, avg=26510.29, stdev=5134.54 00:32:57.717 clat percentiles (usec): 00:32:57.717 | 1.00th=[15926], 5.00th=[19530], 10.00th=[21890], 20.00th=[22938], 00:32:57.717 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24773], 60.00th=[27132], 00:32:57.717 | 70.00th=[29492], 80.00th=[31065], 90.00th=[32637], 95.00th=[34341], 00:32:57.717 | 99.00th=[43779], 99.50th=[46400], 99.90th=[49021], 99.95th=[49021], 00:32:57.717 | 99.99th=[49021] 00:32:57.717 bw ( KiB/s): min= 2224, max= 2592, per=4.19%, avg=2406.00, stdev=126.19, samples=20 00:32:57.717 iops : min= 556, max= 648, avg=601.50, stdev=31.55, samples=20 00:32:57.717 lat (msec) : 20=5.60%, 50=94.40% 00:32:57.717 cpu : usr=98.65%, sys=0.98%, ctx=16, majf=0, minf=65 00:32:57.717 IO depths : 1=0.1%, 2=0.3%, 4=7.6%, 8=77.7%, 16=14.3%, 32=0.0%, >=64=0.0% 00:32:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.717 complete : 0=0.0%, 4=90.3%, 8=5.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.717 issued rwts: total=6031,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.717 filename2: (groupid=0, jobs=1): err= 0: pid=3765186: Wed Jul 24 22:31:50 2024 00:32:57.717 read: IOPS=602, BW=2409KiB/s (2467kB/s)(23.6MiB/10017msec) 00:32:57.717 slat (nsec): min=6811, max=71938, avg=14725.60, stdev=7031.84 00:32:57.717 clat (usec): min=10989, max=52480, avg=26480.98, stdev=5036.80 00:32:57.717 lat (usec): min=10998, max=52526, avg=26495.71, stdev=5037.38 00:32:57.717 clat percentiles (usec): 00:32:57.717 | 1.00th=[14615], 5.00th=[20841], 10.00th=[22152], 20.00th=[23200], 00:32:57.717 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24511], 60.00th=[25822], 00:32:57.717 | 70.00th=[29754], 80.00th=[31065], 90.00th=[32637], 95.00th=[34341], 00:32:57.717 | 99.00th=[41681], 99.50th=[44303], 99.90th=[46924], 99.95th=[48497], 00:32:57.717 | 99.99th=[52691] 00:32:57.717 bw ( KiB/s): min= 2160, max= 2560, per=4.19%, avg=2406.40, stdev=123.35, samples=20 00:32:57.717 iops : min= 540, max= 640, avg=601.60, stdev=30.84, samples=20 00:32:57.717 lat (msec) : 20=4.26%, 50=95.69%, 100=0.05% 00:32:57.717 cpu : usr=98.63%, sys=0.97%, ctx=13, majf=0, minf=62 00:32:57.717 IO depths : 1=0.4%, 2=1.0%, 4=8.2%, 8=77.4%, 16=13.1%, 32=0.0%, >=64=0.0% 00:32:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.717 complete : 0=0.0%, 4=90.2%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.717 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.717 filename2: (groupid=0, jobs=1): err= 0: pid=3765187: Wed Jul 24 22:31:50 2024 00:32:57.717 read: IOPS=628, BW=2515KiB/s (2575kB/s)(24.6MiB/10010msec) 00:32:57.717 slat (nsec): min=6753, max=59604, avg=13900.12, stdev=5823.60 00:32:57.717 clat (usec): min=10552, max=61491, avg=25371.80, stdev=4946.67 00:32:57.717 lat (usec): min=10565, max=61511, avg=25385.70, stdev=4946.57 00:32:57.717 clat percentiles (usec): 00:32:57.717 | 
1.00th=[13566], 5.00th=[19530], 10.00th=[21890], 20.00th=[22676], 00:32:57.717 | 30.00th=[23200], 40.00th=[23725], 50.00th=[23987], 60.00th=[24511], 00:32:57.717 | 70.00th=[25297], 80.00th=[30016], 90.00th=[31851], 95.00th=[33424], 00:32:57.717 | 99.00th=[40109], 99.50th=[43779], 99.90th=[57410], 99.95th=[61604], 00:32:57.717 | 99.99th=[61604] 00:32:57.717 bw ( KiB/s): min= 2176, max= 2656, per=4.36%, avg=2503.16, stdev=128.41, samples=19 00:32:57.717 iops : min= 544, max= 664, avg=625.79, stdev=32.10, samples=19 00:32:57.717 lat (msec) : 20=5.50%, 50=94.25%, 100=0.25% 00:32:57.717 cpu : usr=98.72%, sys=0.88%, ctx=15, majf=0, minf=55 00:32:57.717 IO depths : 1=0.3%, 2=0.8%, 4=7.5%, 8=78.1%, 16=13.3%, 32=0.0%, >=64=0.0% 00:32:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.717 complete : 0=0.0%, 4=90.2%, 8=5.3%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.717 issued rwts: total=6293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.717 filename2: (groupid=0, jobs=1): err= 0: pid=3765188: Wed Jul 24 22:31:50 2024 00:32:57.717 read: IOPS=626, BW=2508KiB/s (2568kB/s)(24.5MiB/10013msec) 00:32:57.717 slat (nsec): min=6807, max=74604, avg=14786.12, stdev=6510.32 00:32:57.717 clat (usec): min=10927, max=48447, avg=25438.29, stdev=4382.56 00:32:57.717 lat (usec): min=10939, max=48457, avg=25453.08, stdev=4383.02 00:32:57.717 clat percentiles (usec): 00:32:57.717 | 1.00th=[15139], 5.00th=[20579], 10.00th=[21890], 20.00th=[22676], 00:32:57.717 | 30.00th=[23200], 40.00th=[23725], 50.00th=[23987], 60.00th=[24511], 00:32:57.717 | 70.00th=[25297], 80.00th=[29754], 90.00th=[31589], 95.00th=[33162], 00:32:57.717 | 99.00th=[38536], 99.50th=[40109], 99.90th=[46924], 99.95th=[46924], 00:32:57.717 | 99.99th=[48497] 00:32:57.717 bw ( KiB/s): min= 2280, max= 2640, per=4.36%, avg=2504.40, stdev=88.88, samples=20 00:32:57.717 iops : min= 570, max= 660, avg=626.10, stdev=22.22, samples=20 00:32:57.717 lat (msec) : 20=4.30%, 50=95.70% 00:32:57.717 cpu : usr=98.52%, sys=1.07%, ctx=9, majf=0, minf=53 00:32:57.717 IO depths : 1=0.6%, 2=1.3%, 4=7.9%, 8=77.4%, 16=12.7%, 32=0.0%, >=64=0.0% 00:32:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.717 complete : 0=0.0%, 4=89.8%, 8=5.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.717 issued rwts: total=6277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.717 filename2: (groupid=0, jobs=1): err= 0: pid=3765189: Wed Jul 24 22:31:50 2024 00:32:57.717 read: IOPS=615, BW=2463KiB/s (2522kB/s)(24.1MiB/10005msec) 00:32:57.717 slat (nsec): min=6513, max=77879, avg=15471.94, stdev=7971.05 00:32:57.717 clat (usec): min=6794, max=59911, avg=25904.88, stdev=5072.23 00:32:57.717 lat (usec): min=6800, max=59935, avg=25920.35, stdev=5072.03 00:32:57.717 clat percentiles (usec): 00:32:57.717 | 1.00th=[14746], 5.00th=[20055], 10.00th=[21890], 20.00th=[22676], 00:32:57.717 | 30.00th=[23200], 40.00th=[23725], 50.00th=[24249], 60.00th=[25035], 00:32:57.717 | 70.00th=[28443], 80.00th=[30540], 90.00th=[32375], 95.00th=[34341], 00:32:57.717 | 99.00th=[41681], 99.50th=[45351], 99.90th=[49546], 99.95th=[49546], 00:32:57.717 | 99.99th=[60031] 00:32:57.717 bw ( KiB/s): min= 2256, max= 2664, per=4.27%, avg=2455.16, stdev=119.22, samples=19 00:32:57.717 iops : min= 564, max= 666, avg=613.79, stdev=29.81, samples=19 00:32:57.717 lat (msec) : 10=0.16%, 20=4.71%, 50=95.08%, 100=0.05% 
00:32:57.717 cpu : usr=98.64%, sys=0.96%, ctx=14, majf=0, minf=63 00:32:57.717 IO depths : 1=0.1%, 2=0.4%, 4=6.6%, 8=79.4%, 16=13.5%, 32=0.0%, >=64=0.0% 00:32:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.717 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.717 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.717 filename2: (groupid=0, jobs=1): err= 0: pid=3765190: Wed Jul 24 22:31:50 2024 00:32:57.717 read: IOPS=647, BW=2588KiB/s (2650kB/s)(25.3MiB/10018msec) 00:32:57.717 slat (nsec): min=4183, max=71002, avg=13129.38, stdev=5900.03 00:32:57.717 clat (usec): min=3239, max=46627, avg=24631.36, stdev=4783.28 00:32:57.717 lat (usec): min=3246, max=46641, avg=24644.49, stdev=4784.20 00:32:57.717 clat percentiles (usec): 00:32:57.717 | 1.00th=[10421], 5.00th=[18220], 10.00th=[21627], 20.00th=[22414], 00:32:57.717 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23725], 60.00th=[24249], 00:32:57.717 | 70.00th=[24773], 80.00th=[26870], 90.00th=[31065], 95.00th=[33162], 00:32:57.717 | 99.00th=[40109], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:32:57.717 | 99.99th=[46400] 00:32:57.717 bw ( KiB/s): min= 2304, max= 2864, per=4.51%, avg=2590.40, stdev=142.51, samples=20 00:32:57.717 iops : min= 576, max= 716, avg=647.60, stdev=35.63, samples=20 00:32:57.717 lat (msec) : 4=0.34%, 10=0.49%, 20=5.48%, 50=93.69% 00:32:57.717 cpu : usr=98.58%, sys=0.99%, ctx=20, majf=0, minf=42 00:32:57.717 IO depths : 1=0.5%, 2=1.2%, 4=7.9%, 8=77.4%, 16=12.9%, 32=0.0%, >=64=0.0% 00:32:57.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.717 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.717 issued rwts: total=6482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.717 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.717 filename2: (groupid=0, jobs=1): err= 0: pid=3765191: Wed Jul 24 22:31:50 2024 00:32:57.717 read: IOPS=537, BW=2151KiB/s (2202kB/s)(21.0MiB/10016msec) 00:32:57.717 slat (usec): min=6, max=109, avg=49.71, stdev=26.96 00:32:57.717 clat (usec): min=11990, max=49936, avg=29508.56, stdev=5765.28 00:32:57.717 lat (usec): min=12004, max=49959, avg=29558.27, stdev=5769.13 00:32:57.717 clat percentiles (usec): 00:32:57.717 | 1.00th=[16712], 5.00th=[21890], 10.00th=[22938], 20.00th=[23987], 00:32:57.717 | 30.00th=[26870], 40.00th=[28967], 50.00th=[30016], 60.00th=[30802], 00:32:57.717 | 70.00th=[31589], 80.00th=[32637], 90.00th=[34341], 95.00th=[40109], 00:32:57.717 | 99.00th=[47449], 99.50th=[47973], 99.90th=[48497], 99.95th=[49021], 00:32:57.717 | 99.99th=[50070] 00:32:57.718 bw ( KiB/s): min= 1904, max= 2408, per=3.74%, avg=2147.60, stdev=157.31, samples=20 00:32:57.718 iops : min= 476, max= 602, avg=536.90, stdev=39.33, samples=20 00:32:57.718 lat (msec) : 20=3.08%, 50=96.92% 00:32:57.718 cpu : usr=98.74%, sys=0.85%, ctx=11, majf=0, minf=57 00:32:57.718 IO depths : 1=0.2%, 2=0.5%, 4=6.8%, 8=79.1%, 16=13.5%, 32=0.0%, >=64=0.0% 00:32:57.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.718 complete : 0=0.0%, 4=89.6%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.718 issued rwts: total=5385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.718 filename2: (groupid=0, jobs=1): err= 0: pid=3765192: Wed Jul 24 22:31:50 2024 00:32:57.718 read: IOPS=604, 
BW=2416KiB/s (2474kB/s)(23.6MiB/10004msec) 00:32:57.718 slat (nsec): min=6819, max=75373, avg=14789.41, stdev=8167.73 00:32:57.718 clat (usec): min=3587, max=65785, avg=26396.03, stdev=5143.56 00:32:57.718 lat (usec): min=3600, max=65809, avg=26410.82, stdev=5142.24 00:32:57.718 clat percentiles (usec): 00:32:57.718 | 1.00th=[13435], 5.00th=[19006], 10.00th=[22152], 20.00th=[22938], 00:32:57.718 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24511], 60.00th=[27395], 00:32:57.718 | 70.00th=[29754], 80.00th=[31327], 90.00th=[32375], 95.00th=[33817], 00:32:57.718 | 99.00th=[39584], 99.50th=[44303], 99.90th=[49546], 99.95th=[49546], 00:32:57.718 | 99.99th=[65799] 00:32:57.718 bw ( KiB/s): min= 2016, max= 2688, per=4.17%, avg=2395.37, stdev=203.06, samples=19 00:32:57.718 iops : min= 504, max= 672, avg=598.84, stdev=50.76, samples=19 00:32:57.718 lat (msec) : 4=0.07%, 10=0.46%, 20=5.05%, 50=94.41%, 100=0.02% 00:32:57.718 cpu : usr=98.82%, sys=0.79%, ctx=22, majf=0, minf=60 00:32:57.718 IO depths : 1=0.2%, 2=1.3%, 4=11.4%, 8=74.1%, 16=13.0%, 32=0.0%, >=64=0.0% 00:32:57.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.718 complete : 0=0.0%, 4=91.3%, 8=3.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:57.718 issued rwts: total=6043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:57.718 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:57.718 00:32:57.718 Run status group 0 (all jobs): 00:32:57.718 READ: bw=56.1MiB/s (58.8MB/s), 2151KiB/s-2588KiB/s (2202kB/s-2650kB/s), io=563MiB (590MB), run=10003-10027msec 00:32:57.718 22:31:50 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:57.718 22:31:50 -- target/dif.sh@43 -- # local sub 00:32:57.718 22:31:50 -- target/dif.sh@45 -- # for sub in "$@" 00:32:57.718 22:31:50 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:57.718 22:31:50 -- target/dif.sh@36 -- # local sub_id=0 00:32:57.718 22:31:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@45 -- # for sub in "$@" 00:32:57.718 22:31:50 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:57.718 22:31:50 -- target/dif.sh@36 -- # local sub_id=1 00:32:57.718 22:31:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@45 -- # for sub in "$@" 00:32:57.718 22:31:50 -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:57.718 22:31:50 -- target/dif.sh@36 -- # local sub_id=2 00:32:57.718 22:31:50 -- target/dif.sh@38 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@115 -- # NULL_DIF=1 00:32:57.718 22:31:50 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:57.718 22:31:50 -- target/dif.sh@115 -- # numjobs=2 00:32:57.718 22:31:50 -- target/dif.sh@115 -- # iodepth=8 00:32:57.718 22:31:50 -- target/dif.sh@115 -- # runtime=5 00:32:57.718 22:31:50 -- target/dif.sh@115 -- # files=1 00:32:57.718 22:31:50 -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:57.718 22:31:50 -- target/dif.sh@28 -- # local sub 00:32:57.718 22:31:50 -- target/dif.sh@30 -- # for sub in "$@" 00:32:57.718 22:31:50 -- target/dif.sh@31 -- # create_subsystem 0 00:32:57.718 22:31:50 -- target/dif.sh@18 -- # local sub_id=0 00:32:57.718 22:31:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 bdev_null0 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 [2024-07-24 22:31:50.957365] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@30 -- # for sub in "$@" 00:32:57.718 22:31:50 -- target/dif.sh@31 -- # create_subsystem 1 00:32:57.718 22:31:50 -- target/dif.sh@18 -- # local sub_id=1 00:32:57.718 22:31:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 bdev_null1 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 
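For readers reconstructing this setup by hand, the rpc_cmd calls traced above correspond directly to SPDK's JSON-RPC methods. A minimal standalone sketch follows; it assumes a running nvmf_tgt with the TCP transport already created, and the scripts/rpc.py path is a guess at the local checkout layout rather than something this log records.

# Sketch only: recreate the DIF-type-1 null bdev target that dif.sh builds above.
RPC="./scripts/rpc.py"   # assumed helper location; the test wraps this in rpc_cmd
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1   # 64 MiB bdev, 512 B blocks + 16 B metadata
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The second subsystem (cnode1/bdev_null1) in the trace is created the same way with the names swapped.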
22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:57.718 22:31:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:57.718 22:31:50 -- common/autotest_common.sh@10 -- # set +x 00:32:57.718 22:31:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:57.718 22:31:50 -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:57.718 22:31:50 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:57.718 22:31:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:57.718 22:31:50 -- nvmf/common.sh@520 -- # config=() 00:32:57.718 22:31:50 -- nvmf/common.sh@520 -- # local subsystem config 00:32:57.718 22:31:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:32:57.718 22:31:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:57.718 22:31:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:57.718 { 00:32:57.718 "params": { 00:32:57.718 "name": "Nvme$subsystem", 00:32:57.718 "trtype": "$TEST_TRANSPORT", 00:32:57.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.718 "adrfam": "ipv4", 00:32:57.718 "trsvcid": "$NVMF_PORT", 00:32:57.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.718 "hdgst": ${hdgst:-false}, 00:32:57.718 "ddgst": ${ddgst:-false} 00:32:57.718 }, 00:32:57.718 "method": "bdev_nvme_attach_controller" 00:32:57.718 } 00:32:57.718 EOF 00:32:57.718 )") 00:32:57.718 22:31:50 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:57.718 22:31:50 -- target/dif.sh@82 -- # gen_fio_conf 00:32:57.718 22:31:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:32:57.718 22:31:50 -- target/dif.sh@54 -- # local file 00:32:57.718 22:31:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:57.718 22:31:50 -- target/dif.sh@56 -- # cat 00:32:57.718 22:31:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:32:57.718 22:31:50 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:57.718 22:31:50 -- common/autotest_common.sh@1320 -- # shift 00:32:57.718 22:31:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:32:57.718 22:31:50 -- nvmf/common.sh@542 -- # cat 00:32:57.718 22:31:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:57.718 22:31:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:57.718 22:31:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:57.718 22:31:51 -- target/dif.sh@72 -- # (( file <= files )) 00:32:57.718 22:31:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:57.718 22:31:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:32:57.718 22:31:51 -- target/dif.sh@73 -- # cat 00:32:57.719 22:31:51 -- nvmf/common.sh@522 
-- # for subsystem in "${@:-1}" 00:32:57.719 22:31:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:32:57.719 { 00:32:57.719 "params": { 00:32:57.719 "name": "Nvme$subsystem", 00:32:57.719 "trtype": "$TEST_TRANSPORT", 00:32:57.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:57.719 "adrfam": "ipv4", 00:32:57.719 "trsvcid": "$NVMF_PORT", 00:32:57.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:57.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:57.719 "hdgst": ${hdgst:-false}, 00:32:57.719 "ddgst": ${ddgst:-false} 00:32:57.719 }, 00:32:57.719 "method": "bdev_nvme_attach_controller" 00:32:57.719 } 00:32:57.719 EOF 00:32:57.719 )") 00:32:57.719 22:31:51 -- nvmf/common.sh@542 -- # cat 00:32:57.719 22:31:51 -- target/dif.sh@72 -- # (( file++ )) 00:32:57.719 22:31:51 -- target/dif.sh@72 -- # (( file <= files )) 00:32:57.719 22:31:51 -- nvmf/common.sh@544 -- # jq . 00:32:57.719 22:31:51 -- nvmf/common.sh@545 -- # IFS=, 00:32:57.719 22:31:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:32:57.719 "params": { 00:32:57.719 "name": "Nvme0", 00:32:57.719 "trtype": "tcp", 00:32:57.719 "traddr": "10.0.0.2", 00:32:57.719 "adrfam": "ipv4", 00:32:57.719 "trsvcid": "4420", 00:32:57.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:57.719 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:57.719 "hdgst": false, 00:32:57.719 "ddgst": false 00:32:57.719 }, 00:32:57.719 "method": "bdev_nvme_attach_controller" 00:32:57.719 },{ 00:32:57.719 "params": { 00:32:57.719 "name": "Nvme1", 00:32:57.719 "trtype": "tcp", 00:32:57.719 "traddr": "10.0.0.2", 00:32:57.719 "adrfam": "ipv4", 00:32:57.719 "trsvcid": "4420", 00:32:57.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:57.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:57.719 "hdgst": false, 00:32:57.719 "ddgst": false 00:32:57.719 }, 00:32:57.719 "method": "bdev_nvme_attach_controller" 00:32:57.719 }' 00:32:57.719 22:31:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:57.719 22:31:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:57.719 22:31:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:32:57.719 22:31:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:57.719 22:31:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:32:57.719 22:31:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:32:57.719 22:31:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:32:57.719 22:31:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:32:57.719 22:31:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:57.719 22:31:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:57.719 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:57.719 ... 00:32:57.719 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:57.719 ... 00:32:57.719 fio-3.35 00:32:57.719 Starting 4 threads 00:32:57.719 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.719 [2024-07-24 22:31:51.931236] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
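The trace just above shows how dif.sh drives the initiator side: it generates one bdev_nvme_attach_controller entry per subsystem, feeds the JSON to fio over /dev/fd/62, and preloads the SPDK bdev fio plugin. A file-based equivalent might look like the sketch below; bdev.json and dif.fio are illustrative names, the plugin path is copied from this workspace, and the Nvme0n1/Nvme1n1 filenames rely on SPDK's usual namespace-bdev naming.

# Sketch: run the same randread workload through fio's external spdk_bdev engine.
# bdev.json would hold the attach-controller JSON printed above (Nvme0/Nvme1 -> 10.0.0.2:4420).
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=bdev.json
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
[filename0]
filename=Nvme0n1
[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD=$PLUGIN /usr/src/fio/fio dif.fio

Passing the config through /dev/fd/62, as the script does, simply avoids writing a temporary file; a named file behaves the same for the plugin.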
00:32:57.719 [2024-07-24 22:31:51.931292] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:02.988 00:33:02.988 filename0: (groupid=0, jobs=1): err= 0: pid=3767169: Wed Jul 24 22:31:57 2024 00:33:02.988 read: IOPS=463, BW=3708KiB/s (3797kB/s)(18.1MiB/5001msec) 00:33:02.988 slat (nsec): min=6192, max=48807, avg=10614.97, stdev=5286.55 00:33:02.988 clat (usec): min=1612, max=47077, avg=17214.04, stdev=19459.58 00:33:02.988 lat (usec): min=1622, max=47097, avg=17224.66, stdev=19459.55 00:33:02.988 clat percentiles (usec): 00:33:02.988 | 1.00th=[ 2212], 5.00th=[ 2540], 10.00th=[ 2835], 20.00th=[ 3195], 00:33:02.988 | 30.00th=[ 3621], 40.00th=[ 4080], 50.00th=[ 4490], 60.00th=[ 5080], 00:33:02.988 | 70.00th=[44303], 80.00th=[45351], 90.00th=[45876], 95.00th=[46400], 00:33:02.988 | 99.00th=[46924], 99.50th=[46924], 99.90th=[46924], 99.95th=[46924], 00:33:02.988 | 99.99th=[46924] 00:33:02.988 bw ( KiB/s): min= 1920, max= 7472, per=5.04%, avg=3665.78, stdev=2134.36, samples=9 00:33:02.988 iops : min= 240, max= 934, avg=458.22, stdev=266.79, samples=9 00:33:02.988 lat (msec) : 2=0.43%, 4=37.49%, 10=29.98%, 50=32.10% 00:33:02.989 cpu : usr=98.48%, sys=1.20%, ctx=10, majf=0, minf=18 00:33:02.989 IO depths : 1=6.8%, 2=15.4%, 4=59.1%, 8=18.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.989 complete : 0=0.0%, 4=89.9%, 8=10.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.989 issued rwts: total=2318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.989 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:02.989 filename0: (groupid=0, jobs=1): err= 0: pid=3767170: Wed Jul 24 22:31:57 2024 00:33:02.989 read: IOPS=2746, BW=21.5MiB/s (22.5MB/s)(107MiB/5001msec) 00:33:02.989 slat (nsec): min=6081, max=51454, avg=8463.54, stdev=3185.61 00:33:02.989 clat (usec): min=907, max=46395, avg=2890.12, stdev=4540.82 00:33:02.989 lat (usec): min=918, max=46419, avg=2898.58, stdev=4541.06 00:33:02.989 clat percentiles (usec): 00:33:02.989 | 1.00th=[ 1336], 5.00th=[ 1549], 10.00th=[ 1696], 20.00th=[ 1926], 00:33:02.989 | 30.00th=[ 2089], 40.00th=[ 2212], 50.00th=[ 2311], 60.00th=[ 2442], 00:33:02.989 | 70.00th=[ 2573], 80.00th=[ 2835], 90.00th=[ 3261], 95.00th=[ 3818], 00:33:02.989 | 99.00th=[43254], 99.50th=[44303], 99.90th=[45876], 99.95th=[46400], 00:33:02.989 | 99.99th=[46400] 00:33:02.989 bw ( KiB/s): min=18752, max=26080, per=30.89%, avg=22469.33, stdev=2334.82, samples=9 00:33:02.989 iops : min= 2344, max= 3260, avg=2808.67, stdev=291.85, samples=9 00:33:02.989 lat (usec) : 1000=0.01% 00:33:02.989 lat (msec) : 2=24.48%, 4=71.54%, 10=2.80%, 50=1.17% 00:33:02.989 cpu : usr=96.88%, sys=2.78%, ctx=8, majf=0, minf=59 00:33:02.989 IO depths : 1=0.5%, 2=1.9%, 4=67.4%, 8=30.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.989 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.989 issued rwts: total=13733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.989 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:02.989 filename1: (groupid=0, jobs=1): err= 0: pid=3767171: Wed Jul 24 22:31:57 2024 00:33:02.989 read: IOPS=3218, BW=25.1MiB/s (26.4MB/s)(126MiB/5028msec) 00:33:02.989 slat (nsec): min=6074, max=52611, avg=8425.16, stdev=3155.89 00:33:02.989 clat (usec): min=890, max=45695, avg=2463.99, stdev=3160.54 00:33:02.989 lat (usec): min=899, max=45709, avg=2472.41, stdev=3160.73 
00:33:02.989 clat percentiles (usec): 00:33:02.989 | 1.00th=[ 1303], 5.00th=[ 1418], 10.00th=[ 1532], 20.00th=[ 1729], 00:33:02.989 | 30.00th=[ 1893], 40.00th=[ 2057], 50.00th=[ 2180], 60.00th=[ 2278], 00:33:02.989 | 70.00th=[ 2442], 80.00th=[ 2638], 90.00th=[ 2966], 95.00th=[ 3425], 00:33:02.989 | 99.00th=[ 4621], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:33:02.989 | 99.99th=[45876] 00:33:02.989 bw ( KiB/s): min=20112, max=30432, per=35.58%, avg=25878.40, stdev=3128.67, samples=10 00:33:02.989 iops : min= 2514, max= 3804, avg=3234.80, stdev=391.08, samples=10 00:33:02.989 lat (usec) : 1000=0.04% 00:33:02.989 lat (msec) : 2=36.67%, 4=61.37%, 10=1.38%, 50=0.54% 00:33:02.989 cpu : usr=96.48%, sys=3.20%, ctx=6, majf=0, minf=36 00:33:02.989 IO depths : 1=0.3%, 2=1.5%, 4=65.5%, 8=32.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.989 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.989 issued rwts: total=16182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.989 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:02.989 filename1: (groupid=0, jobs=1): err= 0: pid=3767172: Wed Jul 24 22:31:57 2024 00:33:02.989 read: IOPS=2684, BW=21.0MiB/s (22.0MB/s)(105MiB/5020msec) 00:33:02.989 slat (nsec): min=6062, max=49701, avg=8529.74, stdev=3127.26 00:33:02.989 clat (usec): min=1159, max=46839, avg=2957.31, stdev=4596.36 00:33:02.989 lat (usec): min=1166, max=46863, avg=2965.84, stdev=4596.44 00:33:02.989 clat percentiles (usec): 00:33:02.989 | 1.00th=[ 1418], 5.00th=[ 1614], 10.00th=[ 1745], 20.00th=[ 1975], 00:33:02.989 | 30.00th=[ 2147], 40.00th=[ 2245], 50.00th=[ 2343], 60.00th=[ 2474], 00:33:02.989 | 70.00th=[ 2638], 80.00th=[ 2900], 90.00th=[ 3392], 95.00th=[ 3916], 00:33:02.989 | 99.00th=[42730], 99.50th=[44827], 99.90th=[45876], 99.95th=[46924], 00:33:02.989 | 99.99th=[46924] 00:33:02.989 bw ( KiB/s): min=14880, max=25584, per=29.63%, avg=21548.80, stdev=3241.15, samples=10 00:33:02.989 iops : min= 1860, max= 3198, avg=2693.60, stdev=405.14, samples=10 00:33:02.989 lat (msec) : 2=22.04%, 4=73.42%, 10=3.35%, 50=1.19% 00:33:02.989 cpu : usr=97.03%, sys=2.63%, ctx=7, majf=0, minf=51 00:33:02.989 IO depths : 1=0.3%, 2=1.8%, 4=65.3%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:02.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.989 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:02.989 issued rwts: total=13476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:02.989 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:02.989 00:33:02.989 Run status group 0 (all jobs): 00:33:02.989 READ: bw=71.0MiB/s (74.5MB/s), 3708KiB/s-25.1MiB/s (3797kB/s-26.4MB/s), io=357MiB (374MB), run=5001-5028msec 00:33:02.989 22:31:57 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:02.989 22:31:57 -- target/dif.sh@43 -- # local sub 00:33:02.989 22:31:57 -- target/dif.sh@45 -- # for sub in "$@" 00:33:02.989 22:31:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:02.989 22:31:57 -- target/dif.sh@36 -- # local sub_id=0 00:33:02.989 22:31:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:02.989 22:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:02.989 22:31:57 -- common/autotest_common.sh@10 -- # set +x 00:33:02.989 22:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:02.989 22:31:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:33:02.989 22:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:02.989 22:31:57 -- common/autotest_common.sh@10 -- # set +x 00:33:02.989 22:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:02.989 22:31:57 -- target/dif.sh@45 -- # for sub in "$@" 00:33:02.989 22:31:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:02.989 22:31:57 -- target/dif.sh@36 -- # local sub_id=1 00:33:02.989 22:31:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:02.989 22:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:02.989 22:31:57 -- common/autotest_common.sh@10 -- # set +x 00:33:02.989 22:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:02.989 22:31:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:02.989 22:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:02.989 22:31:57 -- common/autotest_common.sh@10 -- # set +x 00:33:02.989 22:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:02.989 00:33:02.989 real 0m23.852s 00:33:02.989 user 4m51.824s 00:33:02.989 sys 0m4.205s 00:33:02.989 22:31:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:02.989 22:31:57 -- common/autotest_common.sh@10 -- # set +x 00:33:02.989 ************************************ 00:33:02.989 END TEST fio_dif_rand_params 00:33:02.989 ************************************ 00:33:02.989 22:31:57 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:02.989 22:31:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:02.989 22:31:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:02.989 22:31:57 -- common/autotest_common.sh@10 -- # set +x 00:33:02.989 ************************************ 00:33:02.989 START TEST fio_dif_digest 00:33:02.989 ************************************ 00:33:02.989 22:31:57 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:33:02.989 22:31:57 -- target/dif.sh@123 -- # local NULL_DIF 00:33:02.989 22:31:57 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:02.989 22:31:57 -- target/dif.sh@125 -- # local hdgst ddgst 00:33:02.989 22:31:57 -- target/dif.sh@127 -- # NULL_DIF=3 00:33:02.989 22:31:57 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:02.989 22:31:57 -- target/dif.sh@127 -- # numjobs=3 00:33:02.989 22:31:57 -- target/dif.sh@127 -- # iodepth=3 00:33:02.989 22:31:57 -- target/dif.sh@127 -- # runtime=10 00:33:02.989 22:31:57 -- target/dif.sh@128 -- # hdgst=true 00:33:02.989 22:31:57 -- target/dif.sh@128 -- # ddgst=true 00:33:02.989 22:31:57 -- target/dif.sh@130 -- # create_subsystems 0 00:33:02.989 22:31:57 -- target/dif.sh@28 -- # local sub 00:33:02.989 22:31:57 -- target/dif.sh@30 -- # for sub in "$@" 00:33:02.989 22:31:57 -- target/dif.sh@31 -- # create_subsystem 0 00:33:02.989 22:31:57 -- target/dif.sh@18 -- # local sub_id=0 00:33:02.989 22:31:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:02.989 22:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:02.989 22:31:57 -- common/autotest_common.sh@10 -- # set +x 00:33:02.989 bdev_null0 00:33:02.989 22:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:02.989 22:31:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:02.989 22:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:02.989 22:31:57 -- common/autotest_common.sh@10 -- # set +x 00:33:02.989 22:31:57 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:02.989 22:31:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:02.989 22:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:02.989 22:31:57 -- common/autotest_common.sh@10 -- # set +x 00:33:02.989 22:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:02.989 22:31:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:02.989 22:31:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:02.989 22:31:57 -- common/autotest_common.sh@10 -- # set +x 00:33:02.989 [2024-07-24 22:31:57.349428] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:02.989 22:31:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:02.989 22:31:57 -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:02.989 22:31:57 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:02.989 22:31:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:02.989 22:31:57 -- nvmf/common.sh@520 -- # config=() 00:33:02.989 22:31:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:02.989 22:31:57 -- nvmf/common.sh@520 -- # local subsystem config 00:33:02.989 22:31:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:33:02.989 22:31:57 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:02.989 22:31:57 -- target/dif.sh@82 -- # gen_fio_conf 00:33:02.989 22:31:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:33:02.990 { 00:33:02.990 "params": { 00:33:02.990 "name": "Nvme$subsystem", 00:33:02.990 "trtype": "$TEST_TRANSPORT", 00:33:02.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:02.990 "adrfam": "ipv4", 00:33:02.990 "trsvcid": "$NVMF_PORT", 00:33:02.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:02.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:02.990 "hdgst": ${hdgst:-false}, 00:33:02.990 "ddgst": ${ddgst:-false} 00:33:02.990 }, 00:33:02.990 "method": "bdev_nvme_attach_controller" 00:33:02.990 } 00:33:02.990 EOF 00:33:02.990 )") 00:33:02.990 22:31:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:33:02.990 22:31:57 -- target/dif.sh@54 -- # local file 00:33:02.990 22:31:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:02.990 22:31:57 -- target/dif.sh@56 -- # cat 00:33:02.990 22:31:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:33:02.990 22:31:57 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:02.990 22:31:57 -- common/autotest_common.sh@1320 -- # shift 00:33:02.990 22:31:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:33:02.990 22:31:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:02.990 22:31:57 -- nvmf/common.sh@542 -- # cat 00:33:02.990 22:31:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:33:02.990 22:31:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:02.990 22:31:57 -- target/dif.sh@72 -- # (( file <= files )) 00:33:02.990 22:31:57 -- common/autotest_common.sh@1324 -- # grep libasan 00:33:02.990 22:31:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:02.990 22:31:57 -- 
nvmf/common.sh@544 -- # jq . 00:33:02.990 22:31:57 -- nvmf/common.sh@545 -- # IFS=, 00:33:02.990 22:31:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:33:02.990 "params": { 00:33:02.990 "name": "Nvme0", 00:33:02.990 "trtype": "tcp", 00:33:02.990 "traddr": "10.0.0.2", 00:33:02.990 "adrfam": "ipv4", 00:33:02.990 "trsvcid": "4420", 00:33:02.990 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:02.990 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:02.990 "hdgst": true, 00:33:02.990 "ddgst": true 00:33:02.990 }, 00:33:02.990 "method": "bdev_nvme_attach_controller" 00:33:02.990 }' 00:33:02.990 22:31:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:02.990 22:31:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:02.990 22:31:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:33:02.990 22:31:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:02.990 22:31:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:33:02.990 22:31:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:33:02.990 22:31:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:33:02.990 22:31:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:33:02.990 22:31:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:02.990 22:31:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:02.990 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:02.990 ... 00:33:02.990 fio-3.35 00:33:02.990 Starting 3 threads 00:33:02.990 EAL: No free 2048 kB hugepages reported on node 1 00:33:02.990 [2024-07-24 22:31:57.953800] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
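The config printed above for the digest test differs from the earlier runs only in its attach parameters: "hdgst": true and "ddgst": true ask the NVMe/TCP initiator to enable header and data digests on the connection. A complete file of that shape could look like the sketch below; the outer subsystems/bdev wrapper is the standard SPDK JSON-config layout and is assumed here, since the trace only prints the per-controller entry, and digest_bdev.json is an illustrative name.

# Sketch: SPDK bdev JSON config with NVMe/TCP header and data digests enabled.
cat > digest_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF
# fio consumes it the same way as before, e.g.:
#   LD_PRELOAD=.../spdk/build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf digest_bdev.json <job file>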
00:33:02.990 [2024-07-24 22:31:57.953849] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:33:13.004 00:33:13.004 filename0: (groupid=0, jobs=1): err= 0: pid=3768240: Wed Jul 24 22:32:08 2024 00:33:13.004 read: IOPS=255, BW=32.0MiB/s (33.5MB/s)(320MiB/10006msec) 00:33:13.004 slat (nsec): min=6265, max=45292, avg=19555.63, stdev=8653.33 00:33:13.004 clat (usec): min=5228, max=60275, avg=11700.45, stdev=9492.25 00:33:13.004 lat (usec): min=5236, max=60300, avg=11720.01, stdev=9492.93 00:33:13.004 clat percentiles (usec): 00:33:13.004 | 1.00th=[ 5669], 5.00th=[ 6259], 10.00th=[ 6849], 20.00th=[ 7701], 00:33:13.004 | 30.00th=[ 8586], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10421], 00:33:13.004 | 70.00th=[10814], 80.00th=[11469], 90.00th=[12649], 95.00th=[18744], 00:33:13.004 | 99.00th=[53740], 99.50th=[55837], 99.90th=[58983], 99.95th=[59507], 00:33:13.004 | 99.99th=[60031] 00:33:13.004 bw ( KiB/s): min=20736, max=41472, per=35.36%, avg=32256.00, stdev=4679.35, samples=19 00:33:13.004 iops : min= 162, max= 324, avg=252.00, stdev=36.56, samples=19 00:33:13.005 lat (msec) : 10=50.70%, 20=44.38%, 50=0.31%, 100=4.61% 00:33:13.005 cpu : usr=96.05%, sys=3.54%, ctx=24, majf=0, minf=186 00:33:13.005 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:13.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:13.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:13.005 issued rwts: total=2560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:13.005 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:13.005 filename0: (groupid=0, jobs=1): err= 0: pid=3768241: Wed Jul 24 22:32:08 2024 00:33:13.005 read: IOPS=217, BW=27.1MiB/s (28.5MB/s)(273MiB/10048msec) 00:33:13.005 slat (nsec): min=6449, max=51928, avg=16375.38, stdev=7176.25 00:33:13.005 clat (usec): min=5329, max=98424, avg=13775.02, stdev=11753.27 00:33:13.005 lat (usec): min=5337, max=98442, avg=13791.39, stdev=11753.44 00:33:13.005 clat percentiles (usec): 00:33:13.005 | 1.00th=[ 6325], 5.00th=[ 7439], 10.00th=[ 7963], 20.00th=[ 8848], 00:33:13.005 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10683], 60.00th=[11076], 00:33:13.005 | 70.00th=[11600], 80.00th=[12256], 90.00th=[14353], 95.00th=[51643], 00:33:13.005 | 99.00th=[55313], 99.50th=[56886], 99.90th=[92799], 99.95th=[92799], 00:33:13.005 | 99.99th=[98042] 00:33:13.005 bw ( KiB/s): min=22272, max=36352, per=30.59%, avg=27904.00, stdev=3985.02, samples=20 00:33:13.005 iops : min= 174, max= 284, avg=218.00, stdev=31.13, samples=20 00:33:13.005 lat (msec) : 10=33.96%, 20=58.25%, 50=0.78%, 100=7.01% 00:33:13.005 cpu : usr=97.33%, sys=2.26%, ctx=14, majf=0, minf=134 00:33:13.005 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:13.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:13.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:13.005 issued rwts: total=2182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:13.005 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:13.005 filename0: (groupid=0, jobs=1): err= 0: pid=3768242: Wed Jul 24 22:32:08 2024 00:33:13.005 read: IOPS=240, BW=30.1MiB/s (31.5MB/s)(302MiB/10046msec) 00:33:13.005 slat (nsec): min=6408, max=46678, avg=16298.39, stdev=6788.69 00:33:13.005 clat (usec): min=5475, max=55366, avg=12427.24, stdev=9819.60 00:33:13.005 lat (usec): min=5483, max=55393, avg=12443.54, stdev=9820.14 00:33:13.005 clat 
percentiles (usec): 00:33:13.005 | 1.00th=[ 6063], 5.00th=[ 6849], 10.00th=[ 7635], 20.00th=[ 8455], 00:33:13.005 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[10814], 00:33:13.005 | 70.00th=[11207], 80.00th=[11863], 90.00th=[12780], 95.00th=[50070], 00:33:13.005 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54789], 99.95th=[55313], 00:33:13.005 | 99.99th=[55313] 00:33:13.005 bw ( KiB/s): min=27136, max=36608, per=33.91%, avg=30927.60, stdev=2694.26, samples=20 00:33:13.005 iops : min= 212, max= 286, avg=241.60, stdev=21.07, samples=20 00:33:13.005 lat (msec) : 10=41.40%, 20=52.81%, 50=0.74%, 100=5.05% 00:33:13.005 cpu : usr=97.49%, sys=2.15%, ctx=14, majf=0, minf=149 00:33:13.005 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:13.005 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:13.005 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:13.005 issued rwts: total=2418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:13.005 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:13.005 00:33:13.005 Run status group 0 (all jobs): 00:33:13.005 READ: bw=89.1MiB/s (93.4MB/s), 27.1MiB/s-32.0MiB/s (28.5MB/s-33.5MB/s), io=895MiB (938MB), run=10006-10048msec 00:33:13.264 22:32:08 -- target/dif.sh@132 -- # destroy_subsystems 0 00:33:13.264 22:32:08 -- target/dif.sh@43 -- # local sub 00:33:13.264 22:32:08 -- target/dif.sh@45 -- # for sub in "$@" 00:33:13.264 22:32:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:13.264 22:32:08 -- target/dif.sh@36 -- # local sub_id=0 00:33:13.264 22:32:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:13.264 22:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.264 22:32:08 -- common/autotest_common.sh@10 -- # set +x 00:33:13.264 22:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.264 22:32:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:13.264 22:32:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:13.264 22:32:08 -- common/autotest_common.sh@10 -- # set +x 00:33:13.264 22:32:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:13.264 00:33:13.264 real 0m10.998s 00:33:13.264 user 0m36.024s 00:33:13.264 sys 0m1.079s 00:33:13.264 22:32:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:13.264 22:32:08 -- common/autotest_common.sh@10 -- # set +x 00:33:13.264 ************************************ 00:33:13.264 END TEST fio_dif_digest 00:33:13.264 ************************************ 00:33:13.264 22:32:08 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:33:13.264 22:32:08 -- target/dif.sh@147 -- # nvmftestfini 00:33:13.264 22:32:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:13.264 22:32:08 -- nvmf/common.sh@116 -- # sync 00:33:13.264 22:32:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:13.264 22:32:08 -- nvmf/common.sh@119 -- # set +e 00:33:13.264 22:32:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:13.264 22:32:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:13.264 rmmod nvme_tcp 00:33:13.264 rmmod nvme_fabrics 00:33:13.264 rmmod nvme_keyring 00:33:13.523 22:32:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:13.523 22:32:08 -- nvmf/common.sh@123 -- # set -e 00:33:13.523 22:32:08 -- nvmf/common.sh@124 -- # return 0 00:33:13.523 22:32:08 -- nvmf/common.sh@477 -- # '[' -n 3759736 ']' 00:33:13.523 22:32:08 -- nvmf/common.sh@478 -- # killprocess 3759736 00:33:13.523 22:32:08 -- 
common/autotest_common.sh@926 -- # '[' -z 3759736 ']' 00:33:13.523 22:32:08 -- common/autotest_common.sh@930 -- # kill -0 3759736 00:33:13.523 22:32:08 -- common/autotest_common.sh@931 -- # uname 00:33:13.523 22:32:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:13.523 22:32:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3759736 00:33:13.523 22:32:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:13.523 22:32:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:13.523 22:32:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3759736' 00:33:13.523 killing process with pid 3759736 00:33:13.523 22:32:08 -- common/autotest_common.sh@945 -- # kill 3759736 00:33:13.523 22:32:08 -- common/autotest_common.sh@950 -- # wait 3759736 00:33:13.523 22:32:08 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:33:13.523 22:32:08 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:16.807 Waiting for block devices as requested 00:33:16.807 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:16.807 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:16.807 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:16.807 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:16.807 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:16.807 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:16.807 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:16.807 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:16.807 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:17.066 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:17.066 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:17.066 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:17.066 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:17.325 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:17.325 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:17.325 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:17.583 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:17.583 22:32:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:17.583 22:32:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:17.583 22:32:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:17.583 22:32:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:17.583 22:32:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.583 22:32:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:17.583 22:32:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.483 22:32:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:33:19.483 00:33:19.483 real 1m12.425s 00:33:19.483 user 7m9.423s 00:33:19.483 sys 0m17.250s 00:33:19.483 22:32:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:19.483 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:33:19.483 ************************************ 00:33:19.483 END TEST nvmf_dif 00:33:19.483 ************************************ 00:33:19.483 22:32:14 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:19.483 22:32:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:19.483 22:32:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:19.483 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:33:19.742 ************************************ 00:33:19.742 START TEST nvmf_abort_qd_sizes 
00:33:19.742 ************************************ 00:33:19.742 22:32:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:33:19.742 * Looking for test storage... 00:33:19.742 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:19.742 22:32:14 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:19.742 22:32:14 -- nvmf/common.sh@7 -- # uname -s 00:33:19.742 22:32:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:19.742 22:32:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:19.742 22:32:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:19.742 22:32:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:19.742 22:32:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:19.742 22:32:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:19.742 22:32:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:19.742 22:32:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:19.742 22:32:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:19.742 22:32:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:19.742 22:32:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:19.742 22:32:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:19.742 22:32:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:19.742 22:32:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:19.742 22:32:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:19.742 22:32:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.742 22:32:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.742 22:32:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.742 22:32:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.742 22:32:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.742 22:32:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.742 22:32:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.742 22:32:14 -- paths/export.sh@5 -- # export PATH 00:33:19.742 22:32:14 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.742 22:32:14 -- nvmf/common.sh@46 -- # : 0 00:33:19.743 22:32:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:33:19.743 22:32:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:33:19.743 22:32:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:33:19.743 22:32:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:19.743 22:32:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:19.743 22:32:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:33:19.743 22:32:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:33:19.743 22:32:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:33:19.743 22:32:14 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:33:19.743 22:32:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:33:19.743 22:32:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:19.743 22:32:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:33:19.743 22:32:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:33:19.743 22:32:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:33:19.743 22:32:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.743 22:32:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:19.743 22:32:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.743 22:32:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:33:19.743 22:32:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:33:19.743 22:32:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:33:19.743 22:32:14 -- common/autotest_common.sh@10 -- # set +x 00:33:25.011 22:32:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:33:25.011 22:32:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:33:25.011 22:32:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:33:25.011 22:32:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:33:25.012 22:32:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:33:25.012 22:32:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:33:25.012 22:32:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:33:25.012 22:32:19 -- nvmf/common.sh@294 -- # net_devs=() 00:33:25.012 22:32:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:33:25.012 22:32:19 -- nvmf/common.sh@295 -- # e810=() 00:33:25.012 22:32:19 -- nvmf/common.sh@295 -- # local -ga e810 00:33:25.012 22:32:19 -- nvmf/common.sh@296 -- # x722=() 00:33:25.012 22:32:19 -- nvmf/common.sh@296 -- # local -ga x722 00:33:25.012 22:32:19 -- nvmf/common.sh@297 -- # mlx=() 00:33:25.012 22:32:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:33:25.012 22:32:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.012 22:32:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.012 22:32:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.012 22:32:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.012 22:32:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.012 22:32:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.012 22:32:19 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.012 22:32:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.012 22:32:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.012 22:32:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.012 22:32:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.012 22:32:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:33:25.012 22:32:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:33:25.012 22:32:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:33:25.012 22:32:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:25.012 22:32:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:25.012 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:25.012 22:32:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:33:25.012 22:32:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:25.012 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:25.012 22:32:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:33:25.012 22:32:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:25.012 22:32:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.012 22:32:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:25.012 22:32:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.012 22:32:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:25.012 Found net devices under 0000:86:00.0: cvl_0_0 00:33:25.012 22:32:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.012 22:32:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:33:25.012 22:32:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.012 22:32:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:33:25.012 22:32:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.012 22:32:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:25.012 Found net devices under 0000:86:00.1: cvl_0_1 00:33:25.012 22:32:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.012 22:32:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:33:25.012 22:32:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:33:25.012 22:32:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:33:25.012 22:32:19 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:33:25.012 22:32:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:33:25.012 22:32:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.012 22:32:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.012 22:32:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.012 22:32:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:33:25.012 22:32:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.012 22:32:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.012 22:32:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:33:25.012 22:32:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.012 22:32:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.012 22:32:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:33:25.012 22:32:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:33:25.012 22:32:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.012 22:32:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:25.012 22:32:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:25.012 22:32:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:25.012 22:32:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:33:25.012 22:32:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.012 22:32:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.012 22:32:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.012 22:32:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:33:25.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:25.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:33:25.012 00:33:25.012 --- 10.0.0.2 ping statistics --- 00:33:25.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.012 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:33:25.012 22:32:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:25.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.459 ms 00:33:25.012 00:33:25.012 --- 10.0.0.1 ping statistics --- 00:33:25.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.012 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:33:25.012 22:32:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.012 22:32:20 -- nvmf/common.sh@410 -- # return 0 00:33:25.012 22:32:20 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:33:25.012 22:32:20 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:27.545 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:33:27.545 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:33:28.482 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:33:28.482 22:32:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:28.482 22:32:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:33:28.482 22:32:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:33:28.482 22:32:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:28.482 22:32:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:33:28.482 22:32:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:33:28.482 22:32:23 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:33:28.482 22:32:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:33:28.482 22:32:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:33:28.482 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:33:28.482 22:32:23 -- nvmf/common.sh@469 -- # nvmfpid=3776093 00:33:28.482 22:32:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:33:28.482 22:32:23 -- nvmf/common.sh@470 -- # waitforlisten 3776093 00:33:28.482 22:32:23 -- common/autotest_common.sh@819 -- # '[' -z 3776093 ']' 00:33:28.482 22:32:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.482 22:32:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:33:28.482 22:32:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:28.482 22:32:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:33:28.482 22:32:23 -- common/autotest_common.sh@10 -- # set +x 00:33:28.482 [2024-07-24 22:32:23.581812] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 22.11.4 initialization... 
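The nvmf_tcp_init and nvmfappstart steps traced above reduce to a short sequence of namespace, addressing, and launch commands. A minimal sketch follows, assuming the same e810 port names (cvl_0_0, cvl_0_1) and 10.0.0.x addresses this rig reports and with paths shortened to the SPDK tree; on other hardware the interface names will differ.

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                  # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic back in
ping -c 1 10.0.0.2                                            # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
# waitforlisten then polls /var/tmp/spdk.sock until the target's RPC server answers

Once the listener is up, the subsequent rpc_cmd calls in the trace are issued against that socket.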
00:33:28.482 [2024-07-24 22:32:23.581854] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.482 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.741 [2024-07-24 22:32:23.641328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:28.741 [2024-07-24 22:32:23.683920] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:33:28.741 [2024-07-24 22:32:23.684031] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.741 [2024-07-24 22:32:23.684040] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.741 [2024-07-24 22:32:23.684051] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:28.741 [2024-07-24 22:32:23.684094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.741 [2024-07-24 22:32:23.684216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:33:28.741 [2024-07-24 22:32:23.684300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:33:28.741 [2024-07-24 22:32:23.684301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.309 22:32:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:33:29.309 22:32:24 -- common/autotest_common.sh@852 -- # return 0 00:33:29.309 22:32:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:33:29.309 22:32:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:33:29.309 22:32:24 -- common/autotest_common.sh@10 -- # set +x 00:33:29.309 22:32:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:29.309 22:32:24 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:33:29.309 22:32:24 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:33:29.309 22:32:24 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:33:29.309 22:32:24 -- scripts/common.sh@311 -- # local bdf bdfs 00:33:29.309 22:32:24 -- scripts/common.sh@312 -- # local nvmes 00:33:29.309 22:32:24 -- scripts/common.sh@314 -- # [[ -n 0000:5e:00.0 ]] 00:33:29.568 22:32:24 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:33:29.568 22:32:24 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:33:29.568 22:32:24 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:33:29.568 22:32:24 -- scripts/common.sh@322 -- # uname -s 00:33:29.568 22:32:24 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:33:29.568 22:32:24 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:33:29.568 22:32:24 -- scripts/common.sh@327 -- # (( 1 )) 00:33:29.568 22:32:24 -- scripts/common.sh@328 -- # printf '%s\n' 0000:5e:00.0 00:33:29.568 22:32:24 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:33:29.568 22:32:24 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:5e:00.0 00:33:29.568 22:32:24 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:33:29.568 22:32:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:29.568 22:32:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:29.568 22:32:24 -- common/autotest_common.sh@10 -- # set +x 00:33:29.568 ************************************ 00:33:29.568 START TEST 
spdk_target_abort 00:33:29.568 ************************************ 00:33:29.568 22:32:24 -- common/autotest_common.sh@1104 -- # spdk_target 00:33:29.568 22:32:24 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:33:29.568 22:32:24 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:33:29.568 22:32:24 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:33:29.568 22:32:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:29.568 22:32:24 -- common/autotest_common.sh@10 -- # set +x 00:33:32.172 spdk_targetn1 00:33:32.172 22:32:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.172 22:32:27 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:32.172 22:32:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:32.172 22:32:27 -- common/autotest_common.sh@10 -- # set +x 00:33:32.172 [2024-07-24 22:32:27.286912] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.172 22:32:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.172 22:32:27 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:33:32.172 22:32:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:32.172 22:32:27 -- common/autotest_common.sh@10 -- # set +x 00:33:32.172 22:32:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.172 22:32:27 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:33:32.172 22:32:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:32.172 22:32:27 -- common/autotest_common.sh@10 -- # set +x 00:33:32.432 22:32:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:33:32.432 22:32:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:32.432 22:32:27 -- common/autotest_common.sh@10 -- # set +x 00:33:32.432 [2024-07-24 22:32:27.315928] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.432 22:32:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:32.432 22:32:27 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:32.432 EAL: No free 2048 kB hugepages reported on node 1 00:33:35.726 Initializing NVMe Controllers 00:33:35.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:35.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:35.726 Initialization complete. Launching workers. 00:33:35.726 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5038, failed: 0 00:33:35.726 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1483, failed to submit 3555 00:33:35.726 success 895, unsuccess 588, failed 0 00:33:35.726 22:32:30 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:35.726 22:32:30 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:35.726 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.020 Initializing NVMe Controllers 00:33:39.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:39.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:39.020 Initialization complete. Launching workers. 00:33:39.020 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8583, failed: 0 00:33:39.020 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1222, failed to submit 7361 00:33:39.020 success 342, unsuccess 880, failed 0 00:33:39.020 22:32:33 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:39.020 22:32:33 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:33:39.020 EAL: No free 2048 kB hugepages reported on node 1 00:33:42.313 Initializing NVMe Controllers 00:33:42.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:33:42.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:33:42.313 Initialization complete. Launching workers. 
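The three abort runs that follow sweep the queue depth while keeping the workload fixed; a minimal sketch of that loop as driven by abort_qd_sizes.sh, using the connection string assembled in the trace above (path shortened to the SPDK tree):

target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
for qd in 4 24 64; do
    # 4 KiB I/Os, mixed read/write at a 50% read ratio, aborts issued against the outstanding queue
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done

Each run reports how many I/Os completed, how many aborts were submitted, and how many of those aborts succeeded or failed to be submitted.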
00:33:42.313 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 33770, failed: 0 00:33:42.313 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2813, failed to submit 30957 00:33:42.313 success 737, unsuccess 2076, failed 0 00:33:42.313 22:32:36 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:33:42.313 22:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.313 22:32:36 -- common/autotest_common.sh@10 -- # set +x 00:33:42.313 22:32:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:42.313 22:32:36 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:33:42.313 22:32:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:33:42.313 22:32:36 -- common/autotest_common.sh@10 -- # set +x 00:33:43.257 22:32:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:33:43.257 22:32:38 -- target/abort_qd_sizes.sh@62 -- # killprocess 3776093 00:33:43.257 22:32:38 -- common/autotest_common.sh@926 -- # '[' -z 3776093 ']' 00:33:43.257 22:32:38 -- common/autotest_common.sh@930 -- # kill -0 3776093 00:33:43.257 22:32:38 -- common/autotest_common.sh@931 -- # uname 00:33:43.257 22:32:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:33:43.257 22:32:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3776093 00:33:43.257 22:32:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:33:43.257 22:32:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:33:43.257 22:32:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3776093' 00:33:43.257 killing process with pid 3776093 00:33:43.257 22:32:38 -- common/autotest_common.sh@945 -- # kill 3776093 00:33:43.257 22:32:38 -- common/autotest_common.sh@950 -- # wait 3776093 00:33:43.517 00:33:43.517 real 0m14.031s 00:33:43.517 user 0m56.112s 00:33:43.517 sys 0m2.114s 00:33:43.517 22:32:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:43.517 22:32:38 -- common/autotest_common.sh@10 -- # set +x 00:33:43.517 ************************************ 00:33:43.517 END TEST spdk_target_abort 00:33:43.517 ************************************ 00:33:43.517 22:32:38 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:33:43.517 22:32:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:33:43.517 22:32:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:33:43.517 22:32:38 -- common/autotest_common.sh@10 -- # set +x 00:33:43.517 ************************************ 00:33:43.517 START TEST kernel_target_abort 00:33:43.517 ************************************ 00:33:43.517 22:32:38 -- common/autotest_common.sh@1104 -- # kernel_target 00:33:43.517 22:32:38 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:33:43.517 22:32:38 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:33:43.517 22:32:38 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:33:43.517 22:32:38 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:33:43.517 22:32:38 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:33:43.517 22:32:38 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:33:43.517 22:32:38 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:33:43.517 22:32:38 -- nvmf/common.sh@627 -- # local block nvme 00:33:43.517 22:32:38 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:33:43.517 22:32:38 -- nvmf/common.sh@630 -- # modprobe nvmet 00:33:43.517 22:32:38 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:33:43.517 22:32:38 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:46.055 Waiting for block devices as requested 00:33:46.055 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:33:46.055 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:46.315 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:46.315 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:46.315 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:46.315 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:46.575 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:46.575 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:46.575 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:46.575 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:33:46.834 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:33:46.834 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:33:46.834 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:33:47.094 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:33:47.094 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:33:47.094 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:33:47.094 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:33:47.354 22:32:42 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:33:47.354 22:32:42 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:47.354 22:32:42 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:33:47.354 22:32:42 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:33:47.354 22:32:42 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:33:47.354 No valid GPT data, bailing 00:33:47.354 22:32:42 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:47.354 22:32:42 -- scripts/common.sh@393 -- # pt= 00:33:47.354 22:32:42 -- scripts/common.sh@394 -- # return 1 00:33:47.354 22:32:42 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:33:47.354 22:32:42 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:33:47.354 22:32:42 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:33:47.354 22:32:42 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:33:47.354 22:32:42 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:47.354 22:32:42 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:33:47.354 22:32:42 -- nvmf/common.sh@654 -- # echo 1 00:33:47.354 22:32:42 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:33:47.354 22:32:42 -- nvmf/common.sh@656 -- # echo 1 00:33:47.354 22:32:42 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:33:47.354 22:32:42 -- nvmf/common.sh@663 -- # echo tcp 00:33:47.354 22:32:42 -- nvmf/common.sh@664 -- # echo 4420 00:33:47.354 22:32:42 -- nvmf/common.sh@665 -- # echo ipv4 00:33:47.354 22:32:42 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:47.354 22:32:42 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:33:47.354 00:33:47.354 Discovery Log Number of Records 2, Generation counter 2 00:33:47.354 =====Discovery Log Entry 0====== 00:33:47.354 trtype: tcp 00:33:47.354 adrfam: ipv4 00:33:47.354 
subtype: current discovery subsystem 00:33:47.354 treq: not specified, sq flow control disable supported 00:33:47.354 portid: 1 00:33:47.354 trsvcid: 4420 00:33:47.354 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:47.354 traddr: 10.0.0.1 00:33:47.354 eflags: none 00:33:47.354 sectype: none 00:33:47.354 =====Discovery Log Entry 1====== 00:33:47.354 trtype: tcp 00:33:47.354 adrfam: ipv4 00:33:47.354 subtype: nvme subsystem 00:33:47.354 treq: not specified, sq flow control disable supported 00:33:47.354 portid: 1 00:33:47.354 trsvcid: 4420 00:33:47.354 subnqn: kernel_target 00:33:47.354 traddr: 10.0.0.1 00:33:47.354 eflags: none 00:33:47.354 sectype: none 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:47.354 22:32:42 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:47.354 EAL: No free 2048 kB hugepages reported on node 1 00:33:50.645 Initializing NVMe Controllers 00:33:50.645 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:33:50.645 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:33:50.645 Initialization complete. Launching workers. 
00:33:50.645 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 29948, failed: 0 00:33:50.645 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29948, failed to submit 0 00:33:50.645 success 0, unsuccess 29948, failed 0 00:33:50.645 22:32:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:50.645 22:32:45 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:50.645 EAL: No free 2048 kB hugepages reported on node 1 00:33:53.938 Initializing NVMe Controllers 00:33:53.938 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:33:53.938 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:33:53.938 Initialization complete. Launching workers. 00:33:53.938 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 63373, failed: 0 00:33:53.938 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 16006, failed to submit 47367 00:33:53.938 success 0, unsuccess 16006, failed 0 00:33:53.938 22:32:48 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:53.939 22:32:48 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:33:53.939 EAL: No free 2048 kB hugepages reported on node 1 00:33:56.506 Initializing NVMe Controllers 00:33:56.506 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:33:56.506 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:33:56.506 Initialization complete. Launching workers. 
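For reference, the kernel target exercised in these runs was created earlier in the trace through the standard nvmet configfs layout. The xtrace output records the mkdir and echo commands but not their redirection targets, so the attribute file names below follow the upstream nvmet interface rather than anything printed in the log; a minimal sketch:

nvmet=/sys/kernel/config/nvmet
mkdir $nvmet/subsystems/kernel_target
mkdir $nvmet/subsystems/kernel_target/namespaces/1
mkdir $nvmet/ports/1
echo SPDK-kernel_target > $nvmet/subsystems/kernel_target/attr_serial    # serial string; exact target file assumed
echo 1 > $nvmet/subsystems/kernel_target/attr_allow_any_host             # accept any host NQN
echo /dev/nvme0n1 > $nvmet/subsystems/kernel_target/namespaces/1/device_path
echo 1 > $nvmet/subsystems/kernel_target/namespaces/1/enable
echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
echo tcp  > $nvmet/ports/1/addr_trtype
echo 4420 > $nvmet/ports/1/addr_trsvcid
echo ipv4 > $nvmet/ports/1/addr_adrfam
ln -s $nvmet/subsystems/kernel_target $nvmet/ports/1/subsystems/

The clean_kernel_target step at the end of the test reverses these operations: it disables the namespace, removes the port symlink, rmdirs the configfs entries, and unloads nvmet_tcp and nvmet.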
00:33:56.507 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 61885, failed: 0 00:33:56.507 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 15470, failed to submit 46415 00:33:56.507 success 0, unsuccess 15470, failed 0 00:33:56.507 22:32:51 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:33:56.507 22:32:51 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:33:56.507 22:32:51 -- nvmf/common.sh@677 -- # echo 0 00:33:56.507 22:32:51 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:33:56.507 22:32:51 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:33:56.507 22:32:51 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:56.507 22:32:51 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:33:56.507 22:32:51 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:33:56.507 22:32:51 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:33:56.507 00:33:56.507 real 0m13.068s 00:33:56.507 user 0m3.399s 00:33:56.507 sys 0m3.619s 00:33:56.507 22:32:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:56.507 22:32:51 -- common/autotest_common.sh@10 -- # set +x 00:33:56.507 ************************************ 00:33:56.507 END TEST kernel_target_abort 00:33:56.507 ************************************ 00:33:56.507 22:32:51 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:33:56.507 22:32:51 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:33:56.507 22:32:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:33:56.507 22:32:51 -- nvmf/common.sh@116 -- # sync 00:33:56.507 22:32:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:33:56.507 22:32:51 -- nvmf/common.sh@119 -- # set +e 00:33:56.507 22:32:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:33:56.507 22:32:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:33:56.766 rmmod nvme_tcp 00:33:56.766 rmmod nvme_fabrics 00:33:56.766 rmmod nvme_keyring 00:33:56.766 22:32:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:33:56.766 22:32:51 -- nvmf/common.sh@123 -- # set -e 00:33:56.766 22:32:51 -- nvmf/common.sh@124 -- # return 0 00:33:56.766 22:32:51 -- nvmf/common.sh@477 -- # '[' -n 3776093 ']' 00:33:56.766 22:32:51 -- nvmf/common.sh@478 -- # killprocess 3776093 00:33:56.766 22:32:51 -- common/autotest_common.sh@926 -- # '[' -z 3776093 ']' 00:33:56.766 22:32:51 -- common/autotest_common.sh@930 -- # kill -0 3776093 00:33:56.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3776093) - No such process 00:33:56.766 22:32:51 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3776093 is not found' 00:33:56.766 Process with pid 3776093 is not found 00:33:56.766 22:32:51 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:33:56.766 22:32:51 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:59.304 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:33:59.304 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:33:59.304 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:33:59.304 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:33:59.304 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:33:59.304 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:33:59.304 0000:00:04.2 (8086 2021): Already using the ioatdma 
driver 00:33:59.304 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:33:59.563 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:33:59.563 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:33:59.563 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:33:59.563 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:33:59.563 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:33:59.563 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:33:59.563 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:33:59.563 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:33:59.563 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:33:59.563 22:32:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:33:59.563 22:32:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:33:59.563 22:32:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:59.563 22:32:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:33:59.563 22:32:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.563 22:32:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:59.563 22:32:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:02.102 22:32:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:34:02.102 00:34:02.102 real 0m42.052s 00:34:02.102 user 1m3.266s 00:34:02.102 sys 0m13.446s 00:34:02.102 22:32:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:02.102 22:32:56 -- common/autotest_common.sh@10 -- # set +x 00:34:02.102 ************************************ 00:34:02.102 END TEST nvmf_abort_qd_sizes 00:34:02.102 ************************************ 00:34:02.102 22:32:56 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:34:02.102 22:32:56 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:34:02.102 22:32:56 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:34:02.102 22:32:56 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:34:02.102 22:32:56 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:34:02.102 22:32:56 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:34:02.102 22:32:56 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:34:02.102 22:32:56 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:02.102 22:32:56 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:02.102 22:32:56 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:02.102 22:32:56 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:02.102 22:32:56 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:02.102 22:32:56 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:02.102 22:32:56 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:02.102 22:32:56 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:34:02.102 22:32:56 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:34:02.102 22:32:56 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:34:02.102 22:32:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:34:02.102 22:32:56 -- common/autotest_common.sh@10 -- # set +x 00:34:02.102 22:32:56 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:34:02.102 22:32:56 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:34:02.102 22:32:56 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:34:02.102 22:32:56 -- common/autotest_common.sh@10 -- # set +x 00:34:06.298 INFO: APP EXITING 00:34:06.298 INFO: killing all VMs 00:34:06.298 INFO: killing vhost app 00:34:06.298 INFO: EXIT DONE 00:34:08.836 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:34:08.836 0000:00:04.7 (8086 
2021): Already using the ioatdma driver 00:34:08.836 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:34:08.836 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:34:11.375 Cleaning 00:34:11.375 Removing: /var/run/dpdk/spdk0/config 00:34:11.375 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:11.375 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:11.375 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:11.375 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:11.375 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:34:11.375 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:34:11.375 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:34:11.375 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:34:11.375 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:11.375 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:11.375 Removing: /var/run/dpdk/spdk1/config 00:34:11.375 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:34:11.375 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:34:11.375 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:34:11.375 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:34:11.375 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:34:11.375 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:34:11.375 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:34:11.375 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:34:11.375 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:34:11.375 Removing: /var/run/dpdk/spdk1/hugepage_info 00:34:11.375 Removing: /var/run/dpdk/spdk1/mp_socket 00:34:11.375 Removing: /var/run/dpdk/spdk2/config 00:34:11.375 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:34:11.375 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:34:11.375 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:34:11.375 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:34:11.375 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:34:11.375 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:34:11.375 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:34:11.375 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:34:11.375 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:34:11.375 Removing: /var/run/dpdk/spdk2/hugepage_info 00:34:11.375 Removing: /var/run/dpdk/spdk3/config 00:34:11.375 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:34:11.375 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:34:11.375 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:34:11.375 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:34:11.375 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:34:11.375 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:34:11.375 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:34:11.375 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:34:11.375 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:34:11.375 Removing: /var/run/dpdk/spdk3/hugepage_info 00:34:11.375 Removing: /var/run/dpdk/spdk4/config 00:34:11.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:34:11.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:34:11.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:34:11.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:34:11.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:34:11.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:34:11.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:34:11.375 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:34:11.375 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:34:11.375 Removing: /var/run/dpdk/spdk4/hugepage_info 00:34:11.375 Removing: /dev/shm/bdev_svc_trace.1 00:34:11.375 Removing: /dev/shm/nvmf_trace.0 00:34:11.375 Removing: /dev/shm/spdk_tgt_trace.pid3371063 00:34:11.375 Removing: /var/run/dpdk/spdk0 00:34:11.375 Removing: /var/run/dpdk/spdk1 00:34:11.375 Removing: /var/run/dpdk/spdk2 00:34:11.375 Removing: /var/run/dpdk/spdk3 00:34:11.375 Removing: /var/run/dpdk/spdk4 00:34:11.375 Removing: /var/run/dpdk/spdk_pid3368907 00:34:11.375 Removing: /var/run/dpdk/spdk_pid3369996 00:34:11.375 Removing: /var/run/dpdk/spdk_pid3371063 00:34:11.375 Removing: /var/run/dpdk/spdk_pid3371735 00:34:11.375 Removing: /var/run/dpdk/spdk_pid3373263 00:34:11.375 Removing: /var/run/dpdk/spdk_pid3374548 00:34:11.375 Removing: /var/run/dpdk/spdk_pid3374837 00:34:11.375 Removing: /var/run/dpdk/spdk_pid3375119 00:34:11.375 Removing: /var/run/dpdk/spdk_pid3375424 00:34:11.375 Removing: /var/run/dpdk/spdk_pid3375714 00:34:11.375 Removing: /var/run/dpdk/spdk_pid3375967 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3376221 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3376496 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3377244 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3380174 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3380535 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3380802 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3380822 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3381317 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3381547 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3381857 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3382061 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3382320 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3382418 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3382597 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3382829 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3383187 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3383416 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3383704 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3383977 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3384019 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3384269 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3384513 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3384761 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3384953 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3385159 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3385345 00:34:11.635 
Removing: /var/run/dpdk/spdk_pid3385555 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3385749 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3386004 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3386236 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3386485 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3386719 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3386975 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3387207 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3387460 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3387695 00:34:11.635 Removing: /var/run/dpdk/spdk_pid3387950 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3388185 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3388433 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3388673 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3388920 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3389117 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3389329 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3389514 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3389712 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3389910 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3390167 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3390401 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3390650 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3390884 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3391137 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3391375 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3391623 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3391861 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3392117 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3392354 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3392608 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3392850 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3393103 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3393335 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3393591 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3393653 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3393952 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3397578 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3479447 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3484252 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3494425 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3499638 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3504122 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3504827 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3510779 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3510865 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3511599 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3512505 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3513433 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3513915 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3514078 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3514374 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3514385 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3514387 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3515321 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3516248 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3517139 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3517652 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3517658 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3517899 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3519158 00:34:11.636 Removing: /var/run/dpdk/spdk_pid3520237 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3529087 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3529359 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3533653 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3539567 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3542198 00:34:11.896 
Removing: /var/run/dpdk/spdk_pid3552463 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3561419 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3563280 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3564212 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3581464 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3585275 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3589578 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3591215 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3593141 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3593318 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3593559 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3593798 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3594366 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3596356 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3597249 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3597803 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3603387 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3609022 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3613903 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3650475 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3654322 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3660911 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3662233 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3663792 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3667892 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3672031 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3679567 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3679572 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3684093 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3684328 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3684548 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3684894 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3685032 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3686404 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3688096 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3689729 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3691357 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3693051 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3694827 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3701255 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3701835 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3703613 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3704511 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3710210 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3713009 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3718455 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3724057 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3729782 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3730264 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3730749 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3731319 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3731973 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3732680 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3733273 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3733876 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3738159 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3738401 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3744825 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3745081 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3747322 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3754919 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3754924 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3760011 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3761917 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3763807 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3764998 00:34:11.896 Removing: /var/run/dpdk/spdk_pid3766843 00:34:11.896 
Removing: /var/run/dpdk/spdk_pid3768008
00:34:11.896 Removing: /var/run/dpdk/spdk_pid3776764
00:34:11.896 Removing: /var/run/dpdk/spdk_pid3777230
00:34:11.896 Removing: /var/run/dpdk/spdk_pid3777763
00:34:12.155 Removing: /var/run/dpdk/spdk_pid3779999
00:34:12.155 Removing: /var/run/dpdk/spdk_pid3780533
00:34:12.155 Removing: /var/run/dpdk/spdk_pid3781081
00:34:12.155 Clean
00:34:12.155 killing process with pid 3323792
00:34:20.275 killing process with pid 3323789
00:34:20.275 killing process with pid 3323791
00:34:20.275 killing process with pid 3323790
00:34:20.275 22:33:14 -- common/autotest_common.sh@1436 -- # return 0
00:34:20.275 22:33:14 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:34:20.275 22:33:14 -- common/autotest_common.sh@718 -- # xtrace_disable
00:34:20.275 22:33:14 -- common/autotest_common.sh@10 -- # set +x
00:34:20.275 22:33:14 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:34:20.275 22:33:14 -- common/autotest_common.sh@718 -- # xtrace_disable
00:34:20.275 22:33:14 -- common/autotest_common.sh@10 -- # set +x
00:34:20.275 22:33:14 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:20.275 22:33:14 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:34:20.275 22:33:14 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:34:20.275 22:33:14 -- spdk/autotest.sh@394 -- # hash lcov
00:34:20.275 22:33:14 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:34:20.275 22:33:14 -- spdk/autotest.sh@396 -- # hostname
00:34:20.275 22:33:14 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:34:20.275 geninfo: WARNING: invalid characters removed from testname!
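Note: the lcov capture above and the merge/filter calls that follow below make up one coverage post-processing pass. A minimal standalone sketch of that sequence, with the --rc flags abbreviated and $OUT standing in for the spdk/../output directory used in this run (not part of the pipeline itself):

# sketch only: assumes lcov is installed and a pre-test cov_base.info exists
OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

# capture post-test counters (mirrors autotest.sh@396 above)
lcov $RC --no-external -q -c -d ./spdk -t "$(hostname)" -o "$OUT/cov_test.info"

# merge the pre-test baseline with the test capture (autotest.sh@397 below)
lcov $RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# strip paths that are not SPDK sources (autotest.sh@398-402 below)
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $RC -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done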
00:34:38.425 22:33:33 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:40.958 22:33:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:42.335 22:33:37 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:44.239 22:33:38 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:45.616 22:33:40 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:47.520 22:33:42 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:49.423 22:33:44 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:49.423 22:33:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:49.423 22:33:44 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:49.423 22:33:44 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.423 22:33:44 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.423 22:33:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.423 22:33:44 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.423 22:33:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.423 22:33:44 -- paths/export.sh@5 -- $ export PATH 00:34:49.423 22:33:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.423 22:33:44 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:49.423 22:33:44 -- common/autobuild_common.sh@438 -- $ date +%s 00:34:49.423 22:33:44 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721853224.XXXXXX 00:34:49.423 22:33:44 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721853224.fspcwC 00:34:49.423 22:33:44 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]] 00:34:49.423 22:33:44 -- common/autobuild_common.sh@444 -- $ '[' -n v22.11.4 ']' 00:34:49.423 22:33:44 -- common/autobuild_common.sh@445 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:34:49.423 22:33:44 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:34:49.423 22:33:44 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:49.423 22:33:44 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:49.423 22:33:44 -- common/autobuild_common.sh@454 -- $ get_config_params 00:34:49.423 22:33:44 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:34:49.423 22:33:44 -- common/autotest_common.sh@10 -- $ set +x 00:34:49.423 22:33:44 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:34:49.423 22:33:44 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:34:49.423 22:33:44 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:49.423 22:33:44 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:49.423 22:33:44 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:34:49.423 22:33:44 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:49.423 22:33:44 -- 
spdk/autopackage.sh@19 -- $ timing_finish
00:34:49.423 22:33:44 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:49.423 22:33:44 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:34:49.423 22:33:44 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:34:49.423 22:33:44 -- spdk/autopackage.sh@20 -- $ exit 0
00:34:49.423 + [[ -n 3269682 ]]
00:34:49.423 + sudo kill 3269682
00:34:49.432 [Pipeline] }
00:34:49.444 [Pipeline] // stage
00:34:49.446 [Pipeline] }
00:34:49.455 [Pipeline] // timeout
00:34:49.459 [Pipeline] }
00:34:49.472 [Pipeline] // catchError
00:34:49.477 [Pipeline] }
00:34:49.492 [Pipeline] // wrap
00:34:49.498 [Pipeline] }
00:34:49.511 [Pipeline] // catchError
00:34:49.517 [Pipeline] stage
00:34:49.518 [Pipeline] { (Epilogue)
00:34:49.527 [Pipeline] catchError
00:34:49.527 [Pipeline] {
00:34:49.536 [Pipeline] echo
00:34:49.536 Cleanup processes
00:34:49.539 [Pipeline] sh
00:34:49.822 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:49.822 3794557 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:49.834 [Pipeline] sh
00:34:50.118 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:34:50.118 ++ grep -v 'sudo pgrep'
00:34:50.118 ++ awk '{print $1}'
00:34:50.118 + sudo kill -9
00:34:50.118 + true
00:34:50.131 [Pipeline] sh
00:34:50.480 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:02.706 [Pipeline] sh
00:35:02.989 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:02.989 Artifacts sizes are good
00:35:03.108 [Pipeline] archiveArtifacts
00:35:03.116 Archiving artifacts
00:35:03.320 [Pipeline] sh
00:35:03.605 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:35:03.618 [Pipeline] cleanWs
00:35:03.627 [WS-CLEANUP] Deleting project workspace...
00:35:03.627 [WS-CLEANUP] Deferred wipeout is used...
00:35:03.634 [WS-CLEANUP] done
00:35:03.636 [Pipeline] }
00:35:03.654 [Pipeline] // catchError
00:35:03.667 [Pipeline] sh
00:35:03.951 + logger -p user.info -t JENKINS-CI
00:35:03.959 [Pipeline] }
00:35:03.972 [Pipeline] // stage
00:35:03.976 [Pipeline] }
00:35:03.988 [Pipeline] // node
00:35:03.992 [Pipeline] End of Pipeline
00:35:04.014 Finished: SUCCESS
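Note: the "Cleanup processes" step in the epilogue above reduces to the shell pattern below. This is a minimal sketch assuming the same workspace path, not part of the pipeline itself:

# sketch only: kill any leftover processes still referencing the SPDK tree
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
# kill -9 with an empty pid list exits non-zero, so "|| true" keeps the step
# green, matching the "+ sudo kill -9" / "+ true" lines in the log
sudo kill -9 $pids || true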